For the first time, three competing AI models can communicate, cross-check, and reach consensus on your code. When they agree on a risk, you know it's real. When they disagree, you've found an edge case worth investigating.
Free · 5 credits/month · No credit card required
{
  "mcpServers": {
    "2ndopinion": {
      "command": "npx",
      "args": ["-y", "2ndopinion-cli", "mcp"]
    }
  }
}

No commands to memorize. No special syntax. Just tell your AI what you need.
Claude Code
Codex reviewed your changes. Recommendation: REVIEW
Alternative: Use parameterized queries with $1 placeholders
Done (1 tool use · 1.2k tokens · 3s)
Codex App
Have Claude and Gemini debate whether this approach is correct
Debate: 2 rounds between Claude and Gemini
Round 1 · Claude
The singleton pattern here creates a hidden global dependency. I'd recommend dependency injection instead — it makes testing straightforward and the coupling explicit.
Round 1 · Gemini
I agree the singleton is problematic for testing, but DI adds complexity for a service that's genuinely app-scoped. A module-level instance with a reset function for tests is the simpler path.
Consensus
Both models agree the singleton creates testing problems. They differ on the fix: DI vs. resettable module instance.
Join the beta — free to start
3
Models compared per consensus
12+
Tools across MCP, API, and CLI
<30s
Setup with one config paste
“The consensus feature caught a SQL injection that Claude alone missed. Three models are genuinely better than one.”
— Early beta tester
“I run this on every PR now. The debate mode between Claude and Gemini is surprisingly useful for architecture decisions.”
— Early beta tester
“Finally a code review tool built for how I actually work — inside Claude Code, not a separate browser tab.”
— Early beta tester
Powered by Claude (Anthropic) · Codex (OpenAI) · Gemini (Google)
Use the same tools from any environment.
Add to Claude Code, Cursor, or any MCP-compatible AI assistant. Tools appear natively in your AI's context.
npx 2ndopinion-cli mcp
Cross-platform HTTP gateway with API key auth. Build custom integrations, CI pipelines, or internal tooling.
POST /api/gateway/opinion
Terminal-native experience with interactive menus, watch mode, auto-fix, and PR review. One command to start.
npm i -g 2ndopinion-cli
Install our GitHub App. Every PR gets auto-reviewed with multi-model consensus. Inline comments on the exact lines that need attention.
github.com/apps/2ndopinion-dev
Every tool lets competing AIs communicate and cross-check each other — across MCP, API, and CLI.
Get a second AI opinion on any diff
/api/gateway/opinion
Code review with accept/review/reject verdicts
/api/gateway/review
Ask any model a question about your code
/api/gateway/ask
Explain changes in plain language
/api/gateway/explain
Check usage, limits, and feature access
/api/gateway/status
Generate test suites for your changes
/api/gateway/generate-tests
3-model parallel review with agreement analysis
/api/gateway/consensus
3-model bug hunting with deduplication
/api/gateway/bug-hunt
OWASP security scan with CWE references
/api/gateway/security-audit
Multi-round AI debate between 2 models
/api/gateway/debate
Marketplace skills don't just run a prompt — they run all three models with consensus, chain multi-step pipelines, and access platform intelligence that only exists inside 2ndOpinion.
Scans Django views and models for common security misconfigurations. Runs all 3 models.
Checks hook dependency arrays, ordering violations, and stale closure patterns.
Flags PHI exposure, missing encryption, and audit logging gaps in healthcare apps.
Detects breaking changes in REST and GraphQL schemas before they ship.
Built with domain expertise? Create a skill, set your price (1–20 credits), and earn 70% of every run. Django security, React hooks, HIPAA compliance — if you know it, monetize it.
SDKs, webhooks, batch processing, and streaming — everything you need to integrate.
Official JavaScript/TypeScript and Python clients with full type safety.
npm i @2ndopinion/sdk
pip install 2ndopinion
Real-time events for analysis completion, skill purchases, and usage alerts. HMAC-SHA256 signed.
analysis.completed, skill.purchased, usage.threshold
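Signed webhooks can be verified with a few lines of standard-library code. The sketch below is an assumption-laden illustration, not the documented scheme: it supposes the signature arrives as a hex-encoded HMAC-SHA256 of the raw request body — check the webhook docs for the exact header name, encoding, and any timestamp component.

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw body and compare it to the
    received signature with a constant-time check (avoids timing leaks)."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Hypothetical delivery for an analysis.completed event
secret = "whsec_example"                                   # assumed secret format
body = b'{"event":"analysis.completed","verdict":"accept"}'
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

print(verify_webhook(body, sig, secret))    # True
print(verify_webhook(body, "tampered", secret))  # False
```

Always verify against the raw bytes of the request body, before JSON parsing — re-serializing the payload can change whitespace or key order and break the signature.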
Analyze up to 20 files per request with batch mode. Stream results in real-time via SSE.
POST /api/gateway/batch · /api/gateway/opinion/stream
Write code however you like. 2ndOpinion detects your project context automatically — framework, dependencies, file structure. Every analysis is tailored to YOUR codebase, not a generic prompt.
Claude, Codex, and Gemini independently review the same diff. They're not weighted equally — 2ndOpinion tracks which model is most accurate for each language and issue type, then weights the consensus by proven performance. Known bug patterns are flagged instantly before the LLMs even run.
Get accept/review/reject verdicts with specific risks. See what's new since your last analysis (regression tracking). Auto-fix issues, generate tests, or run a security audit. Every analysis makes the next one smarter.
Most tools run one model and hope for the best. We run three, weight them by accuracy, and check against known patterns before the LLMs even start.
One model’s blind spots become your blind spots
Same generic prompt every time, regardless of language or framework
No memory of yesterday’s analysis — starts fresh every time
No way to validate if the review itself is accurate
Can’t tell you if an issue was already found 847 times by other developers
Three models cross-check — blind spots get caught by the others
Dynamic prompts tuned to your language, framework, and project context
Regression tracking: “2 new issues introduced, 1 resolved since last push”
Confidence-weighted scoring: models earn trust through proven accuracy
Pattern memory: known bugs are flagged instantly before LLMs even run
Every analysis makes the next one smarter. That's not a feature — it's a dataset no one else has.
Powered by data from every analysis run on the platform.
Diff is reviewed by 1 or 3 models
Accuracy tracked per language per category
Known bugs recognized instantly
Weighted consensus + pattern pre-check
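The weighted-consensus step above can be pictured as a simple accuracy-weighted vote. The sketch below is a minimal illustration under assumed inputs (the model names, weights, and verdict labels are hypothetical); the platform's actual scoring is internal and tracks accuracy per language and issue category.

```python
from collections import defaultdict

def weighted_consensus(verdicts: dict[str, str], weights: dict[str, float]) -> str:
    """Combine per-model verdicts (accept/review/reject) into one verdict,
    weighting each model's vote by its historical accuracy in this context."""
    scores: dict[str, float] = defaultdict(float)
    for model, verdict in verdicts.items():
        # Unknown models fall back to a neutral weight of 1.0.
        scores[verdict] += weights.get(model, 1.0)
    return max(scores, key=scores.get)

# Hypothetical accuracy weights for, say, Python security issues
weights = {"claude": 0.92, "codex": 0.88, "gemini": 0.81}
verdicts = {"claude": "review", "codex": "accept", "gemini": "review"}

print(weighted_consensus(verdicts, weights))  # review (0.92 + 0.81 beats 0.88)
```

The point of the weighting: a 2-to-1 split is not automatically decided by headcount — a highly accurate dissenting model can outweigh two weaker agreeing ones.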
One caught bug pays for itself.
Teams share a credit pool — when one developer uses a credit, it deducts from the team's shared balance. Every team member gets access to all Agent-tier tools including consensus, debate, auto-fix, and security audit.
Starter
$5
100 credits
Builder
$20
500 credits
Scale
$100
2,500 credits
Credit packs never expire for active accounts. Available on any plan, including Team plans — pack credits are added to the team's shared pool.
All plans include access to Claude Sonnet 4, Codex, and Gemini 1.5 Pro. Tools cost 1–7 credits depending on complexity.
Three AIs. Calibrated consensus. Pattern memory. Regression tracking. Paste this into your MCP config and start in 30 seconds.
{
  "mcpServers": {
    "2ndopinion": {
      "command": "npx",
      "args": ["-y", "2ndopinion-cli", "mcp"],
      "env": {
        "SECONDOPINION_API_KEY": "sk_2op_your_key_here"
      }
    }
  }
}

Paste your API key for instant setup, or omit the env block and run 2ndopinion login for JWT auth.
Building with a team? Shared credits, shared rules, and every developer makes the reviews smarter for everyone.
Start Team Plan