by 2ndOpinion Team

How to Generate Unit Tests Automatically with AI

AI can generate runnable unit tests directly from your git diff, covering the exact branches you changed — not just the function as a whole.

testing · unit-tests · ai · automation · developer-tools

Writing tests is the step most developers skip under deadline pressure. You ship the feature, tell yourself you will add tests later, and later never comes. The result is untested code paths that fail in production months afterward when someone changes an unrelated function.

AI unit test generation changes this equation. Instead of writing tests manually after the fact, you can generate runnable unit tests directly from your git diff — while the change is still fresh and you already understand what it does.

This guide covers how to automatically generate unit tests from code changes, with real examples and integration options for your existing workflow.

Why Generating Tests From a Diff Works Better

Most AI test tools ask you to paste a function and generate tests for it in isolation. That works, but it misses context. When you generate tests from a diff, the AI sees:

  • What changed, not just what currently exists
  • Which branches and code paths were added or modified
  • What edge cases the specific change might have introduced
  • The before and after state of the affected code

A diff-aware test generator produces tests that target the behavior you actually changed, not generic coverage of the entire file. The result is higher-signal tests that address the risk introduced by the current change.
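The mechanics behind this are straightforward: a unified diff already encodes exactly which lines changed, via its hunk headers. A minimal sketch (illustrative only, not 2ndOpinion's actual implementation) shows how a generator can extract the changed line ranges it should target:

```python
import re

def changed_line_ranges(diff_text: str) -> list[tuple[int, int]]:
    """Extract (start, length) of each changed region in the new file
    from a unified diff, so test generation can focus on those lines."""
    ranges = []
    # Hunk headers look like: @@ -old_start,old_len +new_start,new_len @@
    for match in re.finditer(r"@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@", diff_text):
        start = int(match.group(1))
        length = int(match.group(2) or 1)
        ranges.append((start, length))
    return ranges

diff = """\
--- a/pricing.ts
+++ b/pricing.ts
@@ -10,0 +11,14 @@
+ function calculateDiscount(...) { ... }
"""
print(changed_line_ranges(diff))  # [(11, 14)]
```

Everything the generator needs to scope its work — which file, which lines, added versus removed — is already in the diff, which is why no extra configuration is required.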

A Real Example

Consider this diff — a developer adds a discount calculation function:

+ function calculateDiscount(price: number, couponCode: string): number {
+   if (!couponCode) return price;
+
+   const discounts: Record<string, number> = {
+     SAVE10: 0.10,
+     SAVE20: 0.20,
+     HALFOFF: 0.50,
+   };
+
+   const rate = discounts[couponCode.toUpperCase()];
+   if (!rate) return price;
+
+   return price * (1 - rate);
+ }

Running this through 2ndOpinion's generate-tests tool produces:

import { calculateDiscount } from './pricing';

describe('calculateDiscount', () => {
  it('returns the original price when no coupon code is provided', () => {
    expect(calculateDiscount(100, '')).toBe(100);
  });

  it('applies a 10% discount for SAVE10', () => {
    expect(calculateDiscount(100, 'SAVE10')).toBe(90);
  });

  it('applies a 20% discount for SAVE20', () => {
    expect(calculateDiscount(200, 'SAVE20')).toBe(160);
  });

  it('applies a 50% discount for HALFOFF', () => {
    expect(calculateDiscount(80, 'HALFOFF')).toBe(40);
  });

  it('is case-insensitive for coupon codes', () => {
    expect(calculateDiscount(100, 'save10')).toBe(90);
    expect(calculateDiscount(100, 'Save10')).toBe(90);
  });

  it('returns the original price for an unknown coupon code', () => {
    expect(calculateDiscount(100, 'INVALID')).toBe(100);
  });

  it('handles a price of zero correctly', () => {
    expect(calculateDiscount(0, 'SAVE10')).toBe(0);
  });
});

Seven tests in seconds, covering the happy path, edge cases, and the case-insensitivity behavior. Notice that the AI correctly spotted toUpperCase() in the implementation and generated a case-insensitivity test. A human skimming the diff might miss that detail.

Generating Tests via the CLI

If you are already using the 2ndOpinion CLI, generating tests from the current branch is one command:

# Generate tests for all uncommitted changes
git diff HEAD | 2op generate-tests

# Target a specific file
git diff HEAD -- src/pricing.ts | 2op generate-tests --framework jest

# Save directly to a test file
git diff HEAD | 2op generate-tests --output src/__tests__/pricing.test.ts

The --framework flag accepts jest, vitest, pytest, go-test, and mocha. If you omit it, the tool reads your package.json or pyproject.toml to infer the right framework automatically.

Using the MCP Server in Your Editor

If you are working in Claude Code, Cursor, or another editor with MCP support, the generate_tests tool is available without leaving your editor:

generate_tests for the current diff, framework: vitest

The MCP server runs git diff in your project root, sends the diff to the API, and returns the test code as a formatted code block. No context switching, no copy-pasting between terminals. The tests appear inline where you are already working.
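Conceptually, the tool's flow is two small steps: capture the diff, then wrap the API's response for inline display. A rough sketch of that shape (the helper names here are illustrative, not the server's real internals):

```python
import subprocess

def collect_diff(repo_root: str = ".") -> str:
    """Run `git diff HEAD` in the project root, as the MCP server does."""
    result = subprocess.run(
        ["git", "diff", "HEAD"],
        cwd=repo_root, capture_output=True, text=True, check=True,
    )
    return result.stdout

FENCE = "`" * 3  # avoid embedding literal triple backticks in this snippet

def as_code_block(test_code: str, language: str = "typescript") -> str:
    """Wrap generated tests in a fenced block so the editor renders them inline."""
    return f"{FENCE}{language}\n{test_code.rstrip()}\n{FENCE}"
```

The middle step — sending the diff to the API and receiving the generated tests — is handled by the server itself, which is why the only thing you type in the editor is the one-line request above.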

The MCP quickstart guide covers the full setup — it takes about five minutes.

Integrating with GitHub Actions

To automatically generate tests on every pull request and post them as a PR comment, add this workflow:

name: AI Test Generation

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  generate-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Install 2ndOpinion CLI
        run: npm install -g 2ndopinion

      - name: Generate tests for PR diff
        env:
          SECONDOPINION_API_KEY: ${{ secrets.SECONDOPINION_API_KEY }}
        run: |
          git diff origin/${{ github.base_ref }}...HEAD | \
          2op generate-tests --framework jest > generated-tests.md

      - name: Post tests as PR comment
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const tests = fs.readFileSync('generated-tests.md', 'utf8');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `### Suggested Tests\n\n${tests}`
            });

Every PR gets a comment with generated tests the author can copy into the codebase. Reviewers see the suggested test coverage before approving. This does not force anyone to use the tests — it just eliminates the blank-page problem that causes developers to skip writing them.

What AI Test Generation Handles Well

Automatically generated tests are most reliable for:

  • Pure functions — deterministic input/output with no side effects
  • Validation logic — format checks, range validations, required field enforcement
  • Transformation functions — parsers, formatters, serializers, normalizers
  • Branching conditions — each if/else branch in the diff becomes at least one test case
  • Error and null handling — the AI reads your guard clauses and generates tests that trigger them

For these patterns, the generated tests are typically copy-paste ready. At worst they need minor adjustment to match your project's import paths or assertion style.
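That kind of adjustment is usually mechanical. For instance, rewriting a generated relative import to your project's path alias is a one-line transformation — a throwaway sketch with hypothetical paths:

```python
# Hypothetical: rewrite a generated relative import to the project's
# actual path alias before committing the test file.
def fix_import(test_code: str, generated: str, actual: str) -> str:
    return test_code.replace(f"from '{generated}'", f"from '{actual}'")

generated = "import { calculateDiscount } from './pricing';"
print(fix_import(generated, "./pricing", "@/lib/pricing"))
# import { calculateDiscount } from '@/lib/pricing';
```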

Python Example

The same approach works for Python diffs. Given this change:

+ def parse_iso_date(date_str: str) -> datetime | None:
+     if not date_str:
+         return None
+     try:
+         return datetime.fromisoformat(date_str)
+     except ValueError:
+         return None

The tool generates:

import pytest
from datetime import datetime
from your_module import parse_iso_date

def test_returns_none_for_empty_string():
    assert parse_iso_date("") is None

def test_returns_none_for_invalid_format():
    assert parse_iso_date("not-a-date") is None

def test_parses_valid_iso_date():
    result = parse_iso_date("2026-05-15")
    assert result == datetime(2026, 5, 15)

def test_parses_iso_datetime_with_time():
    result = parse_iso_date("2026-05-15T14:30:00")
    assert result == datetime(2026, 5, 15, 14, 30, 0)

def test_returns_none_for_none_input():
    assert parse_iso_date(None) is None

Where Human Judgment Still Matters

Generated tests are a strong starting point, not a finished test suite. Review them before committing:

  • Integration tests requiring database state or network calls need manual setup — the AI generates the test logic but cannot know your fixtures
  • Tests involving time or randomness — the generated mocks may not match your project's existing mock conventions
  • Business rule validation — confirm the test is asserting the intended behavior, not just a surface-level reading of the implementation

The goal is not to eliminate test-writing entirely. It is to eliminate the blank-page problem so that "I will add tests later" stops being a reason to skip them.

Try It Now

The fastest way to see AI unit test generation in action is the 2ndOpinion playground. Paste any diff, select "Generate Tests," and the output is ready in under 10 seconds.

Test generation costs 1 credit and is available on the Pro plan and above. New accounts can start a 7-day Pro trial — 100 credits, no commitment required, cancel anytime from the billing portal before the trial ends.