Humancheck

How to Integrate Humancheck with Your AI Agents

Adding human oversight to your AI agents doesn't have to be complicated. Humancheck provides a universal API that works with any AI framework, making it easy to integrate human-in-the-loop workflows into your existing systems.

Quick Start

1. Get Your API Key

Sign up at platform.humancheck.dev and generate an API key from your dashboard.

2. Install the SDK (Optional)

While you can use our REST API directly, we provide SDKs for popular languages:

npm install @humancheck/sdk
# or
pip install humancheck

3. Create Your First Review Request

Here's a simple example of requesting a human review:

import { Humancheck } from '@humancheck/sdk';
 
const humancheck = new Humancheck({
  apiKey: process.env.HUMANCHECK_API_KEY,
});
 
// Request human review for an uncertain decision
const review = await humancheck.createReview({
  content: {
    prompt: userPrompt,
    aiResponse: aiGeneratedResponse,
    confidence: 0.65, // Low confidence triggers review
  },
  metadata: {
    taskType: 'content-generation',
    userId: user.id,
    urgency: 'high',
  },
});
 
if (review.status === 'pending') {
  // Wait for human decision (blocking)
  const decision = await humancheck.waitForDecision(review.id);
  return decision.approved ? decision.content : null;
}

Integration Patterns

Pattern 1: Confidence-Based Escalation

Automatically escalate when AI confidence is below a threshold:

import os

from humancheck import Humancheck
 
humancheck = Humancheck(api_key=os.getenv("HUMANCHECK_API_KEY"))
 
def process_with_fallback(prompt, ai_response, confidence):
    if confidence < 0.8:
        # Low confidence - escalate to human
        review = humancheck.create_review(
            content={
                "prompt": prompt,
                "ai_response": ai_response,
                "confidence": confidence
            },
            metadata={"task_type": "content_generation"}
        )
        
        decision = humancheck.wait_for_decision(review.id)
        return decision.content if decision.approved else None
    else:
        # High confidence - use AI response directly
        return ai_response

Pattern 2: Non-Blocking Reviews

Continue processing while human review happens in the background:

async function processWithAsyncReview(task) {
  const aiResponse = await yourAI.respond(task);
  
  // Submit for review without waiting
  const review = await humancheck.createReview({
    content: { task, aiResponse },
    metadata: { priority: 'medium' },
  });
  
  // Continue with the AI response; the human review happens asynchronously.
  // Check the review status later with:
  //   const decision = await humancheck.getReview(review.id);
  return { response: aiResponse, reviewId: review.id };
}
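If you check back on a non-blocking review later, a simple poll-until-complete loop works. The sketch below is illustrative: `get_review` stands in for a client call such as `humancheck.get_review` (the exact method name is an assumption), and it is passed in as a parameter so the polling logic stays testable.

```python
import time


def poll_for_decision(get_review, review_id, interval=2.0, timeout=60.0, sleep=time.sleep):
    """Poll until a review is completed or the timeout expires.

    `get_review` is any callable that returns a dict like
    {"status": "pending"} or {"status": "completed", "decision": {...}}.
    Returns the decision dict, or None on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        review = get_review(review_id)
        if review["status"] == "completed":
            return review["decision"]
        sleep(interval)  # back off between polls
    return None
```

In production you would usually prefer the webhook pattern below over polling, but a poll loop is handy in scripts and batch jobs.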

Pattern 3: LangChain Integration

Use Humancheck with LangChain agents:

import os

from langchain.agents import AgentExecutor
from humancheck.langchain import HumancheckCallbackHandler
 
handler = HumancheckCallbackHandler(
    api_key=os.getenv("HUMANCHECK_API_KEY"),
    confidence_threshold=0.7
)
 
agent = AgentExecutor(
    agent=your_agent,
    tools=your_tools,
    callbacks=[handler]
)
 
# Agent automatically escalates uncertain decisions
result = agent.run("Process this complex query")

Pattern 4: Webhook Notifications

Get notified when reviews are completed:

// Configure webhook in Humancheck dashboard
// Then handle notifications:
 
app.post('/webhooks/humancheck', async (req, res) => {
  const { review_id, status, decision } = req.body;
  
  if (status === 'completed') {
    // Update your system with human decision
    await updateTask(review_id, {
      status: decision.approved ? 'approved' : 'rejected',
      content: decision.content,
    });
  }
  
  res.status(200).send('OK');
});
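Webhook endpoints are public URLs, so it's worth verifying that requests really come from Humancheck. Whether Humancheck signs payloads, and with what header or scheme, is an assumption here; check the dashboard docs. If it uses the common HMAC-SHA256 approach, verification reduces to a constant-time comparison:

```python
import hashlib
import hmac


def verify_signature(payload: bytes, signature_hex: str, secret: bytes) -> bool:
    """Check an HMAC-SHA256 hex signature over the raw request body.

    Uses compare_digest to avoid timing side channels. The signing
    scheme is assumed, not confirmed by Humancheck's docs.
    """
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Call this on the raw request body before parsing JSON, and reject the request with a 401 if it fails.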

Routing Rules

Configure intelligent routing in the Humancheck dashboard:

Rule 1: Urgency-Based Routing

IF urgency == "high" AND task_type == "payment"
THEN route to "finance-team" with priority "urgent"

Rule 2: Expertise-Based Routing

IF task_type == "legal-review"
THEN route to "legal-team"
ELSE route to "general-reviewers"

Rule 3: Confidence-Based Routing

IF confidence < 0.5
THEN route to "senior-reviewers"
ELSE route to "junior-reviewers"
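The rules above live in the Humancheck dashboard, but it can help to see their logic spelled out. The sketch below mirrors them in plain Python, assuming rules are evaluated top to bottom with first match winning (so Rule 3's fallback becomes the final default); the function and field names are illustrative, not part of the Humancheck API.

```python
def route_review(metadata):
    """Mirror of the three dashboard routing rules, first match wins."""
    urgency = metadata.get("urgency")
    task_type = metadata.get("task_type")
    confidence = metadata.get("confidence", 1.0)

    # Rule 1: urgency-based routing
    if urgency == "high" and task_type == "payment":
        return {"queue": "finance-team", "priority": "urgent"}
    # Rule 2: expertise-based routing
    if task_type == "legal-review":
        return {"queue": "legal-team"}
    # Rule 3: confidence-based routing
    if confidence < 0.5:
        return {"queue": "senior-reviewers"}
    return {"queue": "junior-reviewers"}
```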

Best Practices

  1. Set Appropriate Confidence Thresholds: Set the escalation threshold too high and you'll overwhelm reviewers; set it too low and risky edge cases slip through unreviewed.

  2. Use Non-Blocking for Non-Critical Paths: Don't block user-facing operations unless absolutely necessary.

  3. Provide Rich Context: Include all relevant information in review content so humans can make informed decisions.

  4. Implement Feedback Loops: Use human decisions to improve your AI models over time.

  5. Monitor Review Metrics: Track review times, approval rates, and reviewer feedback to optimize workflows.
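Practices 1 and 5 reinforce each other: one way to pick a sensible threshold is to replay historical confidence scores and see what share of traffic each candidate threshold would have escalated. A minimal sketch of that calculation:

```python
def escalation_rate(confidences, threshold):
    """Fraction of past requests that would be escalated to a human
    at the given confidence threshold (escalate when confidence < threshold)."""
    if not confidences:
        return 0.0
    return sum(c < threshold for c in confidences) / len(confidences)
```

Sweeping `threshold` over, say, 0.5 to 0.9 against a week of logs shows the reviewer load each setting implies before you commit to it.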

Framework-Specific Guides

We provide detailed integration guides for:

  • REST API: Universal, works with any framework
  • LangChain/LangGraph: Native callback handlers
  • MCP (Model Context Protocol): Built-in support
  • Custom Adapters: Build your own integration

Check out our documentation for framework-specific examples and detailed API reference.

What's Next?

Ready to add human oversight to your AI agents?

  1. Sign up for a free account
  2. Read our quickstart guide
  3. Join our Discord community for support and updates

Start building safer, more accountable AI systems today!
