The Times of Claw

Using DenchClaw with Python Scripts

DenchClaw Python integration guide: connect to DuckDB directly, use the REST API with requests, build lead enrichment scripts, and schedule with cron. Full code examples.

Mark Rachapoom · 10 min read

Python is the most practical language for CRM automation. Enrichment scripts, bulk imports, data cleanup, scheduled reports — Python handles all of it cleanly. DenchClaw gives you two integration points: direct DuckDB access (fastest, for local scripts) and the REST API (for scripts that run remotely or need the full API surface).

This guide covers both approaches with complete, working code examples.

Two Integration Methods

Method 1: Direct DuckDB access

DuckDB is DenchClaw's local database. The duckdb Python package lets you query it directly — no HTTP, no API keys, maximum speed.

Use this when: Your script runs on the same machine as DenchClaw. Best for data analysis, bulk updates, imports, and ETL tasks.

Method 2: REST API with requests

The DenchClaw REST API runs at http://localhost:4242 when DenchClaw is active. Use the requests library to call it.

Use this when: Your script runs remotely (different machine, CI/CD, cloud functions), or you need webhooks and other API-only features.

Setup

# Install required packages
pip install duckdb requests python-dotenv pandas

Create a .env file for your DenchClaw credentials:

DENCH_DB_PATH=/Users/yourname/.openclaw-dench/workspace/workspace.duckdb
DENCH_API_KEY=dench_live_abc123xyz...
DENCH_API_URL=http://localhost:4242/api/v1
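With packages installed and the .env in place, a quick pre-flight check saves confusing tracebacks when DenchClaw isn't running. This is a sketch: any HTTP response at all, even a 404, counts as "reachable", and `dench_api_up` is a hypothetical helper name.

```python
import os
import requests

def dench_api_up(api_url: str = None, timeout: float = 2.0) -> bool:
    """Return True if anything answers at the DenchClaw API address."""
    api_url = api_url or os.getenv('DENCH_API_URL', 'http://localhost:4242/api/v1')
    try:
        requests.get(api_url, timeout=timeout)  # any response counts as "up"
        return True
    except requests.exceptions.RequestException:
        return False
```

Call it at the top of any API-based script and exit early with a clear message if it returns False.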

Direct DuckDB Access

Connect to the DenchClaw database

import duckdb
import os
from dotenv import load_dotenv
 
load_dotenv()
 
DB_PATH = os.getenv('DENCH_DB_PATH', 
    os.path.expanduser('~/.openclaw-dench/workspace/workspace.duckdb'))
 
def get_connection():
    """Get a DuckDB connection to the DenchClaw database."""
    conn = duckdb.connect(DB_PATH, read_only=False)
    return conn
 
# Test the connection
with get_connection() as conn:
    result = conn.execute("SELECT COUNT(*) FROM entries").fetchone()
    print(f"Total entries: {result[0]}")

Query CRM data with PIVOT views

DenchClaw includes pre-built PIVOT views that flatten the EAV schema into readable columns:

import duckdb
import pandas as pd
 
def get_leads():
    """Get all leads as a DataFrame."""
    with get_connection() as conn:
        df = conn.execute("""
            SELECT 
                name,
                email,
                company_name,
                status,
                phone,
                created_at::DATE as date_added
            FROM v_people
            WHERE status = 'Lead'
            ORDER BY created_at DESC
        """).df()
    return df
 
def get_pipeline_summary():
    """Get deal pipeline summary."""
    with get_connection() as conn:
        df = conn.execute("""
            SELECT 
                status as stage,
                COUNT(*) as deals,
                SUM(value::DECIMAL) as total_value,
                AVG(value::DECIMAL) as avg_value
            FROM v_deals
            GROUP BY status
            ORDER BY total_value DESC NULLS LAST
        """).df()
    return df
 
# Usage
leads = get_leads()
print(f"Found {len(leads)} leads")
print(leads.head())
 
pipeline = get_pipeline_summary()
print("\nPipeline:")
print(pipeline.to_string(index=False))

Update entries via DuckDB

def update_entry_field(entry_id: str, field_name: str, value: str):
    """Update a specific field on an entry."""
    with get_connection() as conn:
        # Check if the field value exists
        existing = conn.execute("""
            SELECT id FROM entry_fields 
            WHERE entry_id = ? AND field_name = ?
        """, [entry_id, field_name]).fetchone()
        
        if existing:
            # Update existing field value
            conn.execute("""
                UPDATE entry_fields 
                SET value = ?, updated_at = NOW()
                WHERE entry_id = ? AND field_name = ?
            """, [value, entry_id, field_name])
        else:
            # Insert new field value
            conn.execute("""
                INSERT INTO entry_fields (entry_id, field_name, value, created_at, updated_at)
                VALUES (?, ?, ?, NOW(), NOW())
            """, [entry_id, field_name, value])
        
        conn.commit()
 
# Usage
update_entry_field('entry_abc123', 'status', 'Qualified')
update_entry_field('entry_abc123', 'lead_score', '85')

REST API with requests

API client class

import requests
import os
from typing import Optional, Dict, Any, List
from dotenv import load_dotenv
 
load_dotenv()
 
class DenchClawClient:
    def __init__(
        self, 
        api_url: str = None, 
        api_key: str = None
    ):
        self.api_url = api_url or os.getenv('DENCH_API_URL', 'http://localhost:4242/api/v1')
        self.api_key = api_key or os.getenv('DENCH_API_KEY')
        self.session = requests.Session()
        self.session.headers.update({
            'Authorization': f'Bearer {self.api_key}',
            'Content-Type': 'application/json'
        })
    
    def list_entries(
        self, 
        object_name: str, 
        limit: int = 50, 
        offset: int = 0,
        filters: Dict = None,
        search: str = None
    ) -> Dict:
        params = {'limit': limit, 'offset': offset}
        if filters:
            for key, value in filters.items():
                params[f'filter[{key}]'] = value
        if search:
            params['search'] = search
        
        response = self.session.get(
            f'{self.api_url}/objects/{object_name}/entries',
            params=params
        )
        response.raise_for_status()
        return response.json()
    
    def get_entry(self, object_name: str, entry_id: str) -> Dict:
        response = self.session.get(
            f'{self.api_url}/objects/{object_name}/entries/{entry_id}'
        )
        response.raise_for_status()
        return response.json()
    
    def create_entry(self, object_name: str, fields: Dict) -> Dict:
        response = self.session.post(
            f'{self.api_url}/objects/{object_name}/entries',
            json={'fields': fields}
        )
        response.raise_for_status()
        return response.json()
    
    def update_entry(self, object_name: str, entry_id: str, fields: Dict) -> Dict:
        response = self.session.patch(
            f'{self.api_url}/objects/{object_name}/entries/{entry_id}',
            json={'fields': fields}
        )
        response.raise_for_status()
        return response.json()
    
    def delete_entry(self, object_name: str, entry_id: str) -> bool:
        response = self.session.delete(
            f'{self.api_url}/objects/{object_name}/entries/{entry_id}'
        )
        response.raise_for_status()
        return response.json().get('deleted', False)
    
    def all_entries(self, object_name: str, filters: Dict = None) -> List[Dict]:
        """Paginate through all entries for an object."""
        all_results = []
        offset = 0
        limit = 100
        
        while True:
            response = self.list_entries(object_name, limit=limit, offset=offset, filters=filters)
            entries = response.get('entries', [])
            all_results.extend(entries)
            
            if not response.get('pagination', {}).get('has_more', False):
                break
            offset += limit
        
        return all_results
 
# Usage
client = DenchClawClient()
leads = client.list_entries('people', filters={'status': 'Lead'})
print(f"Found {leads['pagination']['total']} leads")
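Even a localhost API can drop connections (DenchClaw restarting, the machine waking from sleep). A small retry helper keeps scheduled scripts robust. This is a sketch; tune which exceptions count as retryable and the backoff for your setup:

```python
import time
import requests

def with_retries(fn, attempts: int = 3, backoff: float = 1.0):
    """Call fn(), retrying transient request failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except (requests.exceptions.ConnectionError,
                requests.exceptions.Timeout):
            if attempt == attempts - 1:
                raise  # out of retries — surface the real error
            time.sleep(backoff * (2 ** attempt))

# Usage with the client above:
# client = DenchClawClient()
# leads = with_retries(lambda: client.list_entries('people', filters={'status': 'Lead'}))
```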

Lead Enrichment Script

Here's a complete enrichment script that looks up company data for leads with missing information and updates DenchClaw:

"""
Lead enrichment script for DenchClaw.
Looks up company data using Clearbit-style APIs and updates CRM entries.
"""
 
import duckdb
import requests
import time
import logging
from typing import Optional, Dict
from dotenv import load_dotenv
import os
 
load_dotenv()
logging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)s %(message)s')
logger = logging.getLogger(__name__)
 
DB_PATH = os.getenv('DENCH_DB_PATH', 
    os.path.expanduser('~/.openclaw-dench/workspace/workspace.duckdb'))
 
def lookup_company_by_domain(domain: str) -> Optional[Dict]:
    """
    Look up company info by domain.
    Uses Clearbit's free company enrichment endpoint (or your preferred provider).
    Falls back to basic WHOIS-style lookup.
    """
    # Example using Clearbit Autocomplete (free, no auth required for basic data)
    try:
        response = requests.get(
            f'https://autocomplete.clearbit.com/v1/companies/suggest?query={domain}',
            timeout=5
        )
        if response.status_code == 200:
            results = response.json()
            if results:
                company = results[0]
                return {
                    'company_name': company.get('name'),
                    'website': f"https://{domain}",
                    'logo': company.get('logo'),
                    'industry': None  # Clearbit autocomplete doesn't include industry
                }
    except requests.exceptions.RequestException as e:
        logger.warning(f"Lookup failed for {domain}: {e}")
    
    return None
 
def extract_domain(email: str) -> Optional[str]:
    """Extract domain from email, filter out personal domains."""
    personal_domains = {'gmail.com', 'yahoo.com', 'hotmail.com', 'outlook.com', 
                        'icloud.com', 'me.com', 'protonmail.com', 'aol.com'}
    if '@' not in email:
        return None
    domain = email.split('@')[1].lower()
    return None if domain in personal_domains else domain
 
def enrich_leads():
    """Find leads with missing company data and enrich them."""
    conn = duckdb.connect(DB_PATH)
    
    # Find leads missing company information
    leads_to_enrich = conn.execute("""
        SELECT 
            e.id as entry_id,
            ef_name.value as name,
            ef_email.value as email,
            ef_company.value as company_name
        FROM entries e
        LEFT JOIN entry_fields ef_name ON e.id = ef_name.entry_id AND ef_name.field_name = 'name'
        LEFT JOIN entry_fields ef_email ON e.id = ef_email.entry_id AND ef_email.field_name = 'email'
        LEFT JOIN entry_fields ef_company ON e.id = ef_company.entry_id AND ef_company.field_name = 'company_name'
        LEFT JOIN entry_fields ef_status ON e.id = ef_status.entry_id AND ef_status.field_name = 'status'
        WHERE e.object_name = 'people'
          AND ef_status.value = 'Lead'
          AND (ef_company.value IS NULL OR ef_company.value = '')
          AND ef_email.value IS NOT NULL
        LIMIT 50
    """).fetchall()
    
    logger.info(f"Found {len(leads_to_enrich)} leads to enrich")
    
    enriched_count = 0
    failed_count = 0
    
    for entry_id, name, email, _ in leads_to_enrich:
        domain = extract_domain(email)
        if not domain:
            logger.debug(f"Skipping {email} — personal domain")
            continue
        
        logger.info(f"Enriching {name} ({email}) — domain: {domain}")
        
        company_data = lookup_company_by_domain(domain)
        
        if company_data and company_data.get('company_name'):
            # Update the entry in DenchClaw
            updates = [
                ('company_name', company_data['company_name']),
            ]
            if company_data.get('website'):
                updates.append(('website', company_data['website']))
            
            for field_name, value in updates:
                # Upsert the field value
                existing = conn.execute("""
                    SELECT id FROM entry_fields 
                    WHERE entry_id = ? AND field_name = ?
                """, [entry_id, field_name]).fetchone()
                
                if existing:
                    conn.execute("""
                        UPDATE entry_fields SET value = ?, updated_at = NOW()
                        WHERE entry_id = ? AND field_name = ?
                    """, [value, entry_id, field_name])
                else:
                    conn.execute("""
                        INSERT INTO entry_fields (entry_id, field_name, value, created_at, updated_at)
                        VALUES (?, ?, ?, NOW(), NOW())
                    """, [entry_id, field_name, value])
            
            conn.commit()
            enriched_count += 1
            logger.info(f"  → Updated: {company_data['company_name']}")
        else:
            failed_count += 1
            logger.debug(f"  → No data found for {domain}")
        
        # Rate limiting — be polite to external APIs
        time.sleep(0.5)
    
    conn.close()
    
    logger.info(f"\nEnrichment complete:")
    logger.info(f"  Enriched: {enriched_count}")
    logger.info(f"  Not found: {failed_count}")
    logger.info(f"  Total processed: {len(leads_to_enrich)}")
 
if __name__ == '__main__':
    enrich_leads()

Run it:

python enrich_leads.py

Bulk Import Script

"""
Bulk import contacts from a CSV file into DenchClaw.
Handles deduplication and field mapping.
"""
 
import pandas as pd
import duckdb
import uuid
from datetime import datetime
import os
 
DB_PATH = os.path.expanduser('~/.openclaw-dench/workspace/workspace.duckdb')
 
def import_contacts(csv_path: str, source: str = 'CSV Import'):
    """Import contacts from CSV into DenchClaw people object."""
    df = pd.read_csv(csv_path)
    
    # Clean and normalize
    df.columns = df.columns.str.strip().str.lower().str.replace(' ', '_')
    df = df.fillna('')
    
    conn = duckdb.connect(DB_PATH)
    
    inserted = 0
    skipped = 0
    
    for _, row in df.iterrows():
        email = str(row.get('email', '')).strip().lower()
        
        # Dedup check
        if email:
            existing = conn.execute("""
                SELECT entry_id FROM entry_fields 
                WHERE field_name = 'email' AND LOWER(value) = ?
                LIMIT 1
            """, [email]).fetchone()
            
            if existing:
                skipped += 1
                continue
        
        # Create entry
        entry_id = str(uuid.uuid4())
        now = datetime.utcnow().isoformat()
        
        conn.execute("""
            INSERT INTO entries (id, object_name, created_at, updated_at)
            VALUES (?, 'people', ?, ?)
        """, [entry_id, now, now])
        
        # Build fields
        fields = {
            'name': (str(row.get('name', '')) or 
                    f"{row.get('first_name', '')} {row.get('last_name', '')}").strip(),
            'email': email,
            'phone': str(row.get('phone', '')).strip(),
            'company_name': str(row.get('company_name', '') or row.get('company', '')).strip(),
            'job_title': str(row.get('job_title', '') or row.get('title', '')).strip(),
            'status': 'Lead',
            'source': source,
        }
        
        for field_name, value in fields.items():
            if value:
                conn.execute("""
                    INSERT INTO entry_fields (entry_id, field_name, value, created_at, updated_at)
                    VALUES (?, ?, ?, ?, ?)
                """, [entry_id, field_name, value, now, now])
        
        inserted += 1
    
    conn.commit()
    conn.close()
    
    print(f"Import complete: {inserted} inserted, {skipped} skipped (duplicates)")
 
# Usage
import_contacts('contacts.csv', source='LinkedIn')
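The importer normalizes headers (strip, lowercase, spaces to underscores), so a spreadsheet export with "friendly" column names works unchanged. A hypothetical sample file showing the columns it looks for:

```python
import csv

# 'Name'/'Email'/'Phone' map directly; 'Company' and 'Title' hit the fallbacks
rows = [
    {'Name': 'Ada Lovelace', 'Email': 'ada@example.com',
     'Company': 'Analytical Engines Ltd', 'Title': 'Founder', 'Phone': ''},
    {'Name': 'Alan Turing', 'Email': 'alan@example.com',
     'Company': 'Bletchley Systems', 'Title': 'Researcher', 'Phone': '555-0100'},
]
with open('contacts.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['Name', 'Email', 'Company', 'Title', 'Phone'])
    writer.writeheader()
    writer.writerows(rows)
```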

Scheduling Python Scripts with Cron

Simple cron setup

# Edit your crontab
crontab -e
 
# Add these lines:
 
# Run enrichment script every morning at 8 AM
0 8 * * * /usr/bin/python3 /path/to/scripts/enrich_leads.py >> /tmp/enrich.log 2>&1
 
# Export weekly report every Monday at 9 AM
0 9 * * 1 /usr/bin/python3 /path/to/scripts/weekly_report.py >> /tmp/report.log 2>&1
 
# Run daily dedup cleanup at midnight
0 0 * * * /usr/bin/python3 /path/to/scripts/cleanup_dupes.py >> /tmp/cleanup.log 2>&1

Using a virtual environment in cron

# If using venv, reference it explicitly
0 8 * * * /path/to/venv/bin/python /path/to/scripts/enrich_leads.py

Cron wrapper script

#!/usr/bin/env python3
"""
cron_runner.py — wrapper that handles errors and sends notifications
"""
import subprocess
import sys
import logging
from datetime import datetime
 
logging.basicConfig(
    filename='/tmp/denchclaw-cron.log',
    level=logging.INFO,
    format='%(asctime)s %(message)s'
)
 
def run_script(script_name: str):
    """Run a script and log the result."""
    logger = logging.getLogger()
    logger.info(f"Starting {script_name}")
    
    result = subprocess.run(
        [sys.executable, script_name],
        capture_output=True,
        text=True
    )
    
    if result.returncode == 0:
        logger.info(f"SUCCESS: {script_name}\n{result.stdout}")
    else:
        logger.error(f"FAILED: {script_name}\n{result.stderr}")
        # Optionally notify via DenchClaw webhook here

if __name__ == '__main__':
    run_script(sys.argv[1])

FAQ#

Can I write to DenchClaw's DuckDB file while DenchClaw is running? DuckDB supports concurrent reads. For writes, use WAL (Write-Ahead Logging) mode which DenchClaw enables by default. Direct writes from Python while DenchClaw is running may occasionally conflict — prefer the REST API for concurrent write scenarios, or schedule direct writes during off-hours.

What's the difference between using duckdb.connect() and the REST API? Direct DuckDB is 10-100x faster for bulk operations (no HTTP overhead). The REST API is better for concurrent access, remote scripts, and when you need webhook/event features. For local scripts that don't need to run while DenchClaw is actively being used, direct DuckDB is the best choice.

How do I handle the EAV schema when querying directly? Use DenchClaw's PIVOT views (v_people, v_companies, v_deals) which present data as normal flat tables. Only drop to the raw entries/entry_fields tables when you need something the views don't expose.

Can I use async/await with DenchClaw's REST API? Yes, use httpx instead of requests for async support:

import httpx

async def fetch_people(api_key: str):
    async with httpx.AsyncClient() as client:
        response = await client.get(
            'http://localhost:4242/api/v1/objects/people/entries',
            headers={'Authorization': f'Bearer {api_key}'})
        return response.json()  # run with asyncio.run(fetch_people(key))

What Python version is required? Python 3.8+ is recommended. The duckdb package supports Python 3.7+, but the examples in this guide assume 3.8 or newer.

Ready to try DenchClaw? Install in one command: npx denchclaw. Full setup guide →


Written by

Mark Rachapoom

Building the future of AI CRM software.


© 2026 DenchHQ · San Francisco, CA