The first platform where competing AI models communicate with each other

Claude, Codex, and Gemini just joined forces

For the first time, three competing AI models can communicate, cross-check, and reach consensus on your code. When they agree on a risk, you know it's real. When they disagree, you've found an edge case worth investigating.

Free · 5 credits/month · No credit card required

claude_desktop_config.json
{
  "mcpServers": {
    "2ndopinion": {
      "command": "npx",
      "args": ["-y", "2ndopinion-cli", "mcp"]
    }
  }
}

Just ask

No commands to memorize. No special syntax. Just tell your AI what you need.

Claude Code

Claude Code · ~/my-project
> Get a second opinion on my staged changes
secondopinion_review (diff: staged changes, llm: codex)

Codex reviewed your changes. Recommendation: REVIEW

HIGH SQL query uses string interpolation — vulnerable to injection
MEDIUM Missing error handling on async database call

Alternative: Use parameterized queries with $1 placeholders

Done (1 tool use · 1.2k tokens · 3s)


Codex App

Codex · codex-5.2

Have Claude and Gemini debate whether this approach is correct

Using secondopinion_debate

Debate: 2 rounds between Claude and Gemini

Round 1 · Claude

The singleton pattern here creates a hidden global dependency. I'd recommend dependency injection instead — it makes testing straightforward and the coupling explicit.

Round 1 · Gemini

I agree the singleton is problematic for testing, but DI adds complexity for a service that's genuinely app-scoped. A module-level instance with a reset function for tests is the simpler path.

Consensus

Both models agree the singleton creates testing problems. They differ on the fix: DI vs. resettable module instance.


Join the beta — free to start

3 · Models compared per consensus

12+ · Tools across MCP, API, and CLI

<30s · Setup with one config paste

The consensus feature caught a SQL injection that Claude alone missed. Three models are genuinely better than one.

Early beta tester

I run this on every PR now. The debate mode between Claude and Gemini is surprisingly useful for architecture decisions.

Early beta tester

Finally a code review tool built for how I actually work — inside Claude Code, not a separate browser tab.

Early beta tester

Powered by Claude (Anthropic) · Codex (OpenAI) · Gemini (Google)

Integrate anywhere

Use the same tools from any environment.

MCP Server

Add to Claude Code, Cursor, or any MCP-compatible AI assistant. Tools appear natively in your AI's context.

npx 2ndopinion-cli mcp

REST API

Cross-platform HTTP gateway with API key auth. Build custom integrations, CI pipelines, or internal tooling.

POST /api/gateway/opinion
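A gateway call might look like the sketch below. The endpoint path comes from this page; the base URL, auth header, and request field names (`diff`, `llm`) are illustrative assumptions — check the API reference for the exact schema.

```python
import json
import urllib.request


def build_opinion_request(api_key: str, diff: str, llm: str = "codex"):
    """Build a POST request for /api/gateway/opinion.

    The host and the "diff"/"llm" field names are illustrative
    assumptions, not the documented schema.
    """
    payload = json.dumps({"diff": diff, "llm": llm}).encode()
    return urllib.request.Request(
        "https://api.2ndopinion.dev/api/gateway/opinion",  # assumed host
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth header
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_opinion_request("sk_2op_your_key_here", "diff --git a/app.py ...")
# urllib.request.urlopen(req) would send it; omitted here.
```

From there, the JSON response can feed a CI gate or an internal dashboard.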

CLI

Terminal-native experience with interactive menus, watch mode, auto-fix, and PR review. One command to start.

npm i -g 2ndopinion-cli

GitHub PR Agent

Install our GitHub App. Every PR gets auto-reviewed with multi-model consensus. Inline comments on the exact lines that need attention.

github.com/apps/2ndopinion-dev

10 tools, three models, one verdict

Every tool lets competing AIs communicate and cross-check each other — across MCP, API, and CLI.

Free

Opinion

Get a second AI opinion on any diff

/api/gateway/opinion

Free

Review

Code review with accept/review/reject verdicts

/api/gateway/review

Free

Ask

Ask any model a question about your code

/api/gateway/ask

Free

Explain

Explain changes in plain language

/api/gateway/explain

Free

Status

Check usage, limits, and feature access

/api/gateway/status

Pro+

Generate Tests

Generate test suites for your changes

/api/gateway/generate-tests

Power+

Consensus

3-model parallel review with agreement analysis

/api/gateway/consensus

Power+

Bug Hunt

3-model bug hunting with deduplication

/api/gateway/bug-hunt

Power+

Security Audit

OWASP security scan with CWE references

/api/gateway/security-audit

Agent

Debate

Multi-round AI debate between 2 models

/api/gateway/debate

Build and sell custom AI tools

Marketplace skills don't just run a prompt — they run all three models with consensus, chain multi-step pipelines, and access platform intelligence that only exists inside 2ndOpinion.

SecurityConsensus
9 credits

Django Security Audit

Scans Django views and models for common security misconfigurations. Runs all 3 models.

Code ReviewConsensus
6 credits

React Hook Validator

Checks hook dependency arrays, ordering violations, and stale closure patterns.

SecurityConsensus
15 credits

HIPAA Compliance Check

Flags PHI exposure, missing encryption, and audit logging gaps in healthcare apps.

API DesignConsensus
6 credits

API Breaking Change Detector

Detects breaking changes in REST and GraphQL schemas before they ship.

Have domain expertise? Create a skill, set your price (1–20 credits), and earn 70% of every run. Django security, React hooks, HIPAA compliance — if you know it, monetize it.

Build on the 2ndOpinion API

SDKs, webhooks, batch processing, and streaming — everything you need to integrate.

SDKs

Official JavaScript/TypeScript and Python clients with full type safety.

npm i @2ndopinion/sdk

pip install 2ndopinion

Webhooks

Real-time events for analysis completion, skill purchases, and usage alerts. HMAC-SHA256 signed.

analysis.completed, skill.purchased, usage.threshold
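Verifying an HMAC-SHA256-signed webhook generally looks like this sketch; the signature header name and the hex encoding are assumptions, so check the webhook docs for the exact format.

```python
import hashlib
import hmac


def verify_webhook(secret: str, body: bytes, signature: str) -> bool:
    """Return True if `signature` is a valid HMAC-SHA256 of `body`.

    Assumes a hex-encoded signature; the actual header name and
    encoding may differ -- see the webhook documentation.
    """
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information during comparison
    return hmac.compare_digest(expected, signature)
```

Always compare against the raw request body, before any JSON re-serialization, or the digest won't match.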

Batch & Streaming

Analyze up to 20 files per request with batch mode. Stream results in real-time via SSE.

POST /api/gateway/batch · /api/gateway/opinion/stream
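SSE streams arrive as `data:` lines separated by blank lines; a minimal consumer-side parser (assuming each event's payload is JSON in its `data:` fields, per the standard SSE wire format) might look like:

```python
import json
from typing import Iterable, Iterator


def parse_sse(lines: Iterable[str]) -> Iterator[dict]:
    """Yield one parsed JSON object per server-sent event.

    Assumes each event carries its payload in `data:` fields and that
    events are separated by blank lines, as in the SSE wire format.
    """
    buffer: list[str] = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            # Accumulate multi-line data fields for this event
            buffer.append(line[len("data:"):].strip())
        elif line == "" and buffer:
            # Blank line terminates the event; emit and reset
            yield json.loads("\n".join(buffer))
            buffer = []
```

Feed it the line iterator of the streaming HTTP response to get findings as they arrive instead of waiting for the full analysis.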

How it works

Your AI writes code

Write code however you like. 2ndOpinion detects your project context automatically — framework, dependencies, file structure. Every analysis is tailored to YOUR codebase, not a generic prompt.

Three AIs cross-check with calibrated consensus

Claude, Codex, and Gemini independently review the same diff. They're not weighted equally — 2ndOpinion tracks which model is most accurate for each language and issue type, then weights the consensus by proven performance. Known bug patterns are flagged instantly before the LLMs even run.
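The weighting idea can be illustrated with a toy sketch. The model names come from this page, but the weights, verdict labels, and function shape are made up for illustration; the real calibration data lives inside the platform.

```python
from collections import defaultdict


def weighted_verdict(votes: dict[str, str], weights: dict[str, float]) -> str:
    """Pick the verdict with the highest accuracy-weighted support.

    `votes` maps model name -> verdict; `weights` maps model name ->
    its tracked accuracy for this language/issue type (illustrative).
    """
    scores: dict[str, float] = defaultdict(float)
    for model, verdict in votes.items():
        scores[verdict] += weights.get(model, 1.0)
    return max(scores, key=scores.__getitem__)


# Two well-calibrated models flag a risk; one weaker model disagrees.
verdict = weighted_verdict(
    {"claude": "review", "codex": "review", "gemini": "accept"},
    {"claude": 0.9, "codex": 0.8, "gemini": 0.6},
)
# -> "review"
```

The point of the weighting: a model that has proven accurate for, say, Python security issues counts for more than one that hasn't, so a 2-vs-1 split isn't always decided by raw headcount.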

Ship with confidence — and memory

Get accept/review/reject verdicts with specific risks. See what's new since your last analysis (regression tracking). Auto-fix issues, generate tests, or run a security audit. Every analysis makes the next one smarter.

Not all AI review is equal

Most tools run one model and hope for the best. We run three, weight them by accuracy, and check against known patterns before the LLMs even start.

Single AI review

One model’s blind spots become your blind spots

Same generic prompt every time, regardless of language or framework

No memory of yesterday’s analysis — starts fresh every time

No way to validate if the review itself is accurate

Can’t tell you if an issue was already found 847 times by other developers

2ndOpinion consensus

Three models cross-check — blind spots get caught by the others

Dynamic prompts tuned to your language, framework, and project context

Regression tracking: “2 new issues introduced, 1 resolved since last push”

Confidence-weighted scoring: models earn trust through proven accuracy

Pattern memory: known bugs are flagged instantly before LLMs even run

Every analysis makes the next one smarter. That's not a feature — it's a dataset no one else has.

Gets smarter with every analysis

Powered by data from every analysis run on the platform.

1

Developer runs analysis

Diff is reviewed by 1 or 3 models

2

Models are calibrated

Accuracy tracked per language per category

3

Patterns are learned

Known bugs recognized instantly

4

Next analysis is smarter

Weighted consensus + pattern pre-check

Simple pricing

One caught bug pays for itself.

Free

$0 per month
  • 5 credits per month
  • Opinion + Review + Ask + Explain
  • MCP + API + CLI access
  • All three LLMs

Pro

$10 per month
  • 100 credits per month
  • All Free tools
  • Generate Tests
  • MCP + API + CLI access
Best Value

Power

$25 per month
  • 500 credits per month
  • All Pro features
  • Consensus mode
  • Bug Hunt + Security Audit
Most Powerful

Agent

$49 per month
  • 1,000 credits per month
  • Everything in Power
  • AI Debate + Auto-Fix
  • PR Review agent

For engineering teams

Teams share a credit pool — when one developer uses a credit, it deducts from the team's shared balance. Every team member gets access to all Agent-tier tools including consensus, debate, auto-fix, and security audit.

Team

$99 per month
  • 2,500 shared credits/month
  • Up to 10 seats
  • All Agent-tier tools
  • Team rules & shared patterns
  • GitHub PR reviews (org-level)
  • Team usage dashboard
Start Team Plan
Most Popular

Team Pro

$199 per month
  • 6,000 shared credits/month
  • Up to 25 seats
  • Everything in Team
  • Priority support
Start Team Pro

Enterprise

Custom
  • Unlimited seats
  • SSO / SAML
  • SLA
  • Dedicated support
  • Custom integrations
Contact Us

Need more credits? Buy a pack anytime.

Starter

$5

100 credits

Builder

$20

500 credits

Scale

$100

2,500 credits

Credit packs never expire for active accounts. Available on any plan, including Team plans — pack credits are added to the team's shared pool.

All plans include access to Claude Sonnet 4, Codex, and Gemini 1.5 Pro. Tools cost 1–7 credits depending on complexity.

Your AI reviews get smarter with every push

Three AIs. Calibrated consensus. Pattern memory. Regression tracking. Paste this into your MCP config and start in 30 seconds.

claude_desktop_config.json
{
  "mcpServers": {
    "2ndopinion": {
      "command": "npx",
      "args": ["-y", "2ndopinion-cli", "mcp"],
      "env": {
        "SECONDOPINION_API_KEY": "sk_2op_your_key_here"
      }
    }
  }
}

Paste your API key for instant setup, or omit the env block and run 2ndopinion login for JWT auth.

Building with a team? Shared credits, shared rules, and every developer makes the reviews smarter for everyone.

Start Team Plan