
How to Use OpenClaw with Google Gemini

OpenClaw + Google Gemini: configure DenchClaw to use Gemini 2.0 Flash or Pro as your AI model. Step-by-step setup, model selection, and tips.

Mark Rachapoom · 7 min read

OpenClaw supports Google Gemini as a first-class model provider. If you're using DenchClaw and want access to Gemini 2.0 Flash's speed or Gemini Pro's reasoning depth, this guide walks through the full configuration — from API key to running your first CRM query.

Why Use Gemini with OpenClaw?

Google's Gemini models bring some distinct advantages to the DenchClaw agent stack:

  • Gemini 2.0 Flash is one of the fastest large models available, with low latency that makes agent loops feel snappy
  • Long context window — Gemini's models support up to 1 million tokens, meaning you can feed in large datasets or long document histories without chunking
  • Multimodal support — Gemini can process images, which opens up use cases like reading screenshots, invoices, or charts in your CRM workflows
  • Competitive pricing — Flash is extremely cost-efficient for high-volume agent tasks

If you're doing bulk data enrichment, processing large documents, or just want a fast cloud model while keeping your data architecture local, Gemini is a solid choice.

Step 1: Get a Google Gemini API Key

  1. Go to ai.google.dev and click Get API Key
  2. Sign in with your Google account
  3. Create a new project or select an existing one
  4. Generate an API key from Google AI Studio

Copy the key — you'll need it in the next step.

If you're on a Google Cloud project and want to use Vertex AI instead of the direct API, the setup is slightly different (covered in the FAQ).

Step 2: Set the API Key in OpenClaw

Add the Gemini API key to OpenClaw's environment:

openclaw config set apiKeys.google YOUR_API_KEY_HERE

Or set it directly in your config file (~/.openclaw/config.json):

{
  "apiKeys": {
    "google": "YOUR_API_KEY_HERE"
  }
}

You can also export it as an environment variable before running OpenClaw:

export GOOGLE_API_KEY="YOUR_API_KEY_HERE"

Step 3: Configure Gemini as Your Model

Set Gemini as your active model:

openclaw config set model google/gemini-2.0-flash

Or in your config JSON:

{
  "model": {
    "provider": "google",
    "model": "gemini-2.0-flash-exp"
  }
}

Restart OpenClaw to apply the change:

openclaw restart

Step 4: Available Gemini Models

Google releases Gemini models in several tiers. Here's what's relevant for DenchClaw workloads:

| Model | Best For | Context Window | Speed |
|---|---|---|---|
| gemini-2.0-flash | Fast agent tasks, high volume | 1M tokens | Very fast |
| gemini-2.0-flash-thinking | Complex reasoning, planning | 1M tokens | Moderate |
| gemini-1.5-pro | Long document processing | 2M tokens | Moderate |
| gemini-1.5-flash | Balanced speed/quality | 1M tokens | Fast |

For most DenchClaw day-to-day use, start with gemini-2.0-flash. It handles CRM queries, Skill execution, and research tasks quickly and accurately. Switch to flash-thinking when you need the agent to reason through complex multi-step problems.

Step 5: Test Your Setup

Run a quick test:

openclaw chat "What are the most recent entries in my CRM?"

You should see a response within 1-2 seconds with Gemini Flash. If you get an authentication error, double-check your API key is set correctly.

Using Gemini's Long Context for CRM Tasks

One of Gemini's standout features for DenchClaw is the massive context window. This unlocks use cases that aren't practical with 8K or 16K context models:

Full pipeline analysis: Feed your entire CRM dataset into a single Gemini request and ask for patterns, anomalies, or recommendations. With 1M+ tokens, you can include thousands of contact records in context.

Document processing Skills: Use the DenchClaw Skills system to attach PDFs, transcripts, or long email threads to your CRM records. Gemini can process the full document rather than requiring chunking.

Long conversation memory: Gemini maintains coherent context across very long sessions, making extended research workflows more accurate.
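Whether a given dataset actually fits is easy to sanity-check before sending it. Here is a minimal sketch using the common rule of thumb of roughly 4 characters per token; real tokenization varies by model, so treat the result as an estimate:

```python
# Rough check of whether a CRM export fits in Gemini's context window.
# The ~4 characters/token heuristic is an approximation, not a guarantee.

def estimate_tokens(text: str) -> int:
    return len(text) // 4

def fits_in_context(records: list[str], window: int = 1_000_000) -> bool:
    total = sum(estimate_tokens(r) for r in records)
    return total <= window

# e.g. 5,000 contact records of ~400 characters each is roughly 500K tokens
records = ["x" * 400] * 5000
print(fits_in_context(records))  # True: ~500K tokens fits in a 1M window
```

If the estimate comes in near the limit, leave headroom for the system prompt and the model's response rather than filling the window completely.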

To take advantage of this, make sure OpenClaw's token limit for Gemini is raised so it doesn't cap what the model can handle:

{
  "model": {
    "provider": "google",
    "model": "gemini-2.0-flash-exp",
    "maxTokens": 32768
  }
}

Configuring Gemini for Different Tasks

You can override the model per-session or per-command in OpenClaw:

# Use fast Flash for quick queries
openclaw chat --model google/gemini-2.0-flash "Show me all leads added this week"
 
# Use thinking model for complex analysis
openclaw chat --model google/gemini-2.0-flash-thinking "Analyze my sales pipeline and suggest the top 3 deals to prioritize"

This is useful when you want speed for routine tasks but more reasoning power for strategic analysis.

API Costs and Usage Management

Gemini Flash is notably cost-effective. As of early 2026, it's among the lowest-cost capable models per million tokens. For typical DenchClaw usage (a few dozen agent calls per day), you'll likely stay well within Google AI Studio's free tier.
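The back-of-envelope math is simple enough to script. The sketch below uses placeholder rates, not real prices; substitute the current figures from Google's pricing page:

```python
# Rough monthly spend estimate for agent-style API usage.
# Rates are USD per million tokens and are PLACEHOLDERS, not real prices.

def monthly_cost(calls_per_day: int, tokens_in: int, tokens_out: int,
                 in_rate: float, out_rate: float, days: int = 30) -> float:
    daily = calls_per_day * (tokens_in * in_rate + tokens_out * out_rate) / 1e6
    return round(daily * days, 2)

# 50 agent calls/day, ~3K tokens in and ~500 out per call,
# at hypothetical rates of $0.10 in / $0.40 out per million tokens:
print(monthly_cost(50, 3000, 500, 0.10, 0.40))  # 0.75
```

Even with generous assumptions, light agent usage tends to land in the cents-per-month range, which is why Flash works well for high-volume enrichment.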

To monitor usage:

  1. Visit aistudio.google.com
  2. Check the Usage tab for your API key
  3. Set billing alerts if you're doing high-volume enrichment

For production workloads, consider moving to a Google Cloud project to get SLAs and higher rate limits.

Combining Gemini with Local Storage

Using Gemini doesn't mean your data goes into Google's training sets. Under Google's paid API terms, prompts are not used to train models; the free tier's data-use terms differ, so review the current policy before sending sensitive data. Your CRM data stored in DuckDB stays local: only the query and context you explicitly send to the API leave your machine.

This is a key distinction from "cloud-native CRM" tools where your data permanently lives on the vendor's servers. With DenchClaw's architecture, DuckDB is the source of truth on your machine. Gemini is just the reasoning layer — it processes data on demand but doesn't store it.

If this distinction matters for your compliance requirements, review Google's API data use policy and consider using Vertex AI, which offers explicit data residency controls.

Troubleshooting

Error: "API key not valid"

Verify the key is copied correctly (no trailing spaces), and make sure you're using a Google AI Studio key rather than an API key from a different Google Cloud product.

"Rate limit exceeded" errors

The free tier has lower rate limits. For production use, add a billing account to your Google Cloud project. You can also reduce the number of concurrent agent calls in OpenClaw so requests are spread out instead of rejected.

Gemini returns different output format than expected

Gemini's response format can differ slightly from OpenAI-format models. If a Skill expects OpenAI-style function calling and gets unexpected output, check if the Skill has Gemini-specific handling. Most DenchClaw Skills are model-agnostic, but some edge cases exist.

Slow responses despite Flash being fast

Check your network latency to Google's API endpoint. If you're in a region far from Google's data centers, consider using Vertex AI with a nearby region.

FAQ

Does Gemini support function calling / tool use with OpenClaw?

Yes. Gemini 1.5 and 2.0 models support function calling, which OpenClaw uses for Skills and agent tasks. The interface is slightly different from OpenAI's tool calling format, but OpenClaw handles the translation automatically.

Can I use Vertex AI instead of Google AI Studio with OpenClaw?

Yes, but you'll need to set the base URL to your Vertex AI endpoint and use service account credentials instead of an API key. This is recommended for enterprise use with data residency requirements.
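If OpenClaw's config supports overriding the provider endpoint, a Vertex-style setup might look like the sketch below. The `providers`, `baseUrl`, and `credentialsFile` keys are assumptions for illustration (JSON has no comments, so they can't be flagged inline); check OpenClaw's provider documentation for the exact field names, and replace the capitalized placeholders with your own values:

```json
{
  "model": {
    "provider": "google",
    "model": "gemini-2.0-flash"
  },
  "providers": {
    "google": {
      "baseUrl": "https://REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/REGION/publishers/google",
      "credentialsFile": "~/service-account.json"
    }
  }
}
```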

Is Gemini Flash good enough for complex CRM analysis?

For most tasks, yes. Gemini Flash is significantly more capable than its speed suggests. For very complex multi-step reasoning or nuanced analysis, gemini-2.0-flash-thinking or gemini-1.5-pro are stronger choices.

How does Gemini compare to GPT-4 for DenchClaw tasks?

Both perform well. Gemini Flash is faster and cheaper; GPT-4 has a larger ecosystem of fine-tuned variants. For straightforward CRM operations, the practical difference is minimal. See our GPT-5 guide for more on the OpenAI side.

Can I use Gemini's multimodal features in DenchClaw?

Yes, for Skills that explicitly send image data. If you build a Skill that captures screenshots or processes image attachments, Gemini will handle them natively. Standard text-based CRM tasks don't require multimodal support.

Ready to try DenchClaw? Install in one command: npx denchclaw. Full setup guide →


Written by

Mark Rachapoom

Building the future of AI CRM software.


© 2026 DenchHQ · San Francisco, CA