
docs: add blog post — How We Test TanStack AI Across 7 Providers #823

Open
AlemTuzlak wants to merge 6 commits into TanStack:main from AlemTuzlak:blog/how-we-test-tanstack-ai-across-7-providers

Conversation


@AlemTuzlak commented Apr 13, 2026

Summary

  • Adds new blog post: "How We Test TanStack AI Across 7 Providers on Every PR"
  • Covers the E2E testing infrastructure: 137 deterministic tests across 7 LLM providers using aimock, Playwright, and TanStack Start
  • Includes cover image and callouts to TanStack AI and the aimock repo

Test plan

  • Verify the post renders correctly at /blog/how-we-test-tanstack-ai-across-7-providers
  • Check cover image displays as header on the blog index
  • Confirm markdown renders properly (code blocks, tables, mermaid diagram, links)

Summary by CodeRabbit

  • Documentation
    • Published a new blog post describing end-to-end testing across seven LLM providers: deterministic fixture-based responses, per-test isolation for parallel runs, fixture matching for multi-step tool sequences, an optional recording mode for generating/updating fixtures, a provider support matrix with graceful skipping, and quantitative coverage (137 tests across 17 features) plus local run instructions.


coderabbitai bot commented Apr 13, 2026

📝 Walkthrough

New blog post describing an E2E testing setup for TanStack AI: Playwright drives a TanStack Start harness app, provider adapters are routed to an aimock fixture server, per-test X-Test-Id ensures parallel isolation, fixtures use sequenceIndex for multi-step flows, and an optional recording mode captures real provider traffic.
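The per-test isolation and multi-step fixture matching described above amount to a keyed lookup: each X-Test-Id owns its own ordered fixture sequence, and a per-test cursor advances through it. Here's a minimal TypeScript sketch of that idea (names like `FixtureStore` and its methods are illustrative, not the actual aimock API):

```typescript
// Hypothetical sketch of aimock-style fixture matching: each test gets its
// own fixture sequence, keyed by the X-Test-Id header, and multi-step tool
// flows advance through that sequence via a per-test cursor (sequenceIndex).
type Fixture = { sequenceIndex: number; body: string };

class FixtureStore {
  private fixtures = new Map<string, Fixture[]>();
  private cursors = new Map<string, number>();

  register(testId: string, fixtures: Fixture[]): void {
    // Sort so sequenceIndex order wins regardless of registration order.
    this.fixtures.set(
      testId,
      [...fixtures].sort((a, b) => a.sequenceIndex - b.sequenceIndex),
    );
    this.cursors.set(testId, 0);
  }

  // Called once per incoming provider request; returns the next fixture
  // for this test, so parallel tests never see each other's responses.
  next(testId: string): Fixture {
    const seq = this.fixtures.get(testId);
    const cursor = this.cursors.get(testId) ?? 0;
    if (!seq || cursor >= seq.length) {
      throw new Error(`No fixture for test ${testId} at step ${cursor}`);
    }
    this.cursors.set(testId, cursor + 1);
    return seq[cursor];
  }
}
```

Because the cursor is scoped to the test id, two tests running in parallel can never consume each other's fixtures, which is what makes fully parallel Playwright workers safe against a single shared mock server.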

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Blog Post** `src/blog/how-we-test-tanstack-ai-across-7-providers.md` | Adds a new markdown article documenting the E2E testing architecture: Playwright-driven harness, provider adapters redirected to aimock for deterministic fixture responses, per-test X-Test-Id isolation for parallel runs, fixture matching with sequenceIndex, optional recording mode, and coverage numbers (137 tests, 17 features, 7 providers). |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Playwright
    participant Browser
    participant HarnessApp as "TanStack Start\n(harness)"
    participant Adapter as "Provider Adapter"
    participant AiMock as "aimock\n(fixture server)"

    Playwright->>Browser: open harness page (with X-Test-Id)
    Browser->>HarnessApp: user actions / triggers
    HarnessApp->>Adapter: provider request (forward X-Test-Id)
    Adapter->>AiMock: HTTP request (matched by X-Test-Id + sequenceIndex)
    AiMock-->>Adapter: deterministic fixture response
    Adapter-->>HarnessApp: provider response
    HarnessApp-->>Browser: render/update UI
    Browser-->>Playwright: assertions complete
```
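The Adapter→aimock hop in the diagram is an ordinary HTTP request with the test id forwarded as a header. A small illustrative sketch of building that request (the endpoint path, function, and field names are assumptions for illustration, not the real adapter API):

```typescript
// Illustrative sketch: construct the HTTP request an adapter would send to
// the fixture server, forwarding X-Test-Id so responses stay per-test.
interface MockRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildMockRequest(
  baseUrl: string,
  testId: string,
  payload: unknown,
): MockRequest {
  return {
    url: `${baseUrl}/v1/chat/completions`, // assumed endpoint shape
    headers: {
      "Content-Type": "application/json",
      "X-Test-Id": testId,                 // the isolation key from the test
    },
    body: JSON.stringify(payload),
  };
}
```

The only moving part per test is the header value; everything else is identical across providers, so the same fixture server can stand in for all seven.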

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Poem

🐰 I hopped through tests with mock delight,
Playwright guided each browser flight,
aimock hummed answers, calm and neat,
Fixtures danced in tidy sequence, sweet,
Seven providers, one rabbit's bright night.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The pull request title accurately summarizes the main change: adding a new blog post documenting TanStack AI's E2E testing infrastructure across 7 providers, which directly corresponds to the file addition and PR objectives. |




netlify bot commented Apr 13, 2026

👷 Deploy request for tanstack pending review.

Visit the deploys page to approve it

🔨 Latest commit: fd68cd2


@coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
src/blog/how-we-test-tanstack-ai-across-7-providers.md (1)

56-57: Make quantitative claims time-tolerant to reduce future drift.

Consider wording like “currently” for test/provider/runtime counts so the post stays accurate as the suite evolves.

Proposed wording tweak

```diff
-147 tests cover 17 features across 7 providers. Here's the matrix:
+Currently, 147 tests cover 17 features across 7 providers. Here's the matrix:

-The full suite completes in about 2 minutes with parallel execution.
+The full suite currently completes in about 2 minutes with parallel execution.

-No API keys. No setup. 147 tests across 7 providers in about 2 minutes.
+No API keys. No setup. Currently: 147 tests across 7 providers in about 2 minutes.
```

Also applies to: 73-74, 147-147

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/blog/how-we-test-tanstack-ai-across-7-providers.md` around lines 56 - 57,
The text making quantitative claims about tests and providers is time-sensitive
and may become inaccurate as the suite evolves. To fix this, update the wording
in the relevant sections around lines 56 and 147 in the post to include terms
like "currently" or "as of now" before mentioning test counts or provider
numbers to clarify that these numbers may change over time.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/blog/how-we-test-tanstack-ai-across-7-providers.md`:
- Around line 3-4: The frontmatter currently sets draft: true which prevents
publishing; to publish the post update the frontmatter by changing the draft
field to false (i.e., replace "draft: true" with "draft: false") in the blog
file so the post becomes visible; ensure you only modify the draft key and leave
published: 2026-04-13 unchanged.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: dffea76d-7c95-4a20-9cdc-f326be5546b3

📥 Commits

Reviewing files that changed from the base of the PR and between 3660110 and 4fdec29.

📒 Files selected for processing (1)
  • src/blog/how-we-test-tanstack-ai-across-7-providers.md


netlify bot commented Apr 13, 2026

👷 Deploy request for tanstack pending review.

Visit the deploys page to approve it

🔨 Latest commit: 27e1e3a


@coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/blog/how-we-test-tanstack-ai-across-7-providers.md`:
- Line 14: The intro claim "147 tests" is inconsistent with the table rows that
sum to 137 (28 + 38 + 10 + 20 + 3 + 29 + 6 + 3); search for the literal "147
tests" and the test breakdown table (also referenced later in the same file
around the second occurrence) and either update the intro to "137 tests" or
adjust the table row counts so they sum to 147—ensure both occurrences (the
opening paragraph and the duplicated block around lines 56–67) are changed to
the same, corrected total.
- Line 14: Update the incorrect test count string "147 tests" to "137 tests" in
the markdown where it appears (the opening paragraph and the later summary
sentence that currently say "147 tests"), or alternatively add the missing 10
test entries to the feature matrix so the summed breakdown (28 + 38 + 10 + 20 +
3 + 29 + 6 + 3) equals the declared total; locate the occurrences by searching
for the literal "147 tests" and the feature matrix breakdown and make the change
so the displayed total matches the summed values.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c4ac1caa-1f7f-41c9-81a5-7927ef84bd4a

📥 Commits

Reviewing files that changed from the base of the PR and between 4fdec29 and b861d43.

📒 Files selected for processing (1)
  • src/blog/how-we-test-tanstack-ai-across-7-providers.md

@AlemTuzlak AlemTuzlak changed the title docs: add blog post on TanStack AI E2E testing infrastructure docs: add blog post — How We Test TanStack AI Across 7 Providers Apr 13, 2026

@coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/blog/how-we-test-tanstack-ai-across-7-providers.md`:
- Line 5: Update the frontmatter excerpt value to reflect the corrected test
count by changing the "excerpt:" string that currently reads "TanStack AI runs
147 deterministic E2E tests across 7 LLM providers in under 2 minutes. Here's
the testing infrastructure that makes it possible." to use "137" instead of
"147" so the excerpt matches the post content and feature matrix.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 39107961-3d02-41f6-9804-f022373ad2ec

📥 Commits

Reviewing files that changed from the base of the PR and between b861d43 and 27e1e3a.

⛔ Files ignored due to path filters (1)
  • public/blog-assets/how-we-test-tanstack-ai-across-7-providers/cover.png is excluded by !**/*.png
📒 Files selected for processing (1)
  • src/blog/how-we-test-tanstack-ai-across-7-providers.md

```yaml
title: 'How We Test TanStack AI Across 7 Providers on Every PR'
published: 2026-04-13
draft: false
excerpt: "TanStack AI runs 147 deterministic E2E tests across 7 LLM providers in under 2 minutes. Here's the testing infrastructure that makes it possible."
```


⚠️ Potential issue | 🟡 Minor

Update the excerpt to reflect the corrected test count.

The excerpt still claims "147 deterministic E2E tests," but the post content and feature matrix were updated to 137 tests (lines 16, 58, and the table at lines 60-69). The excerpt should match the corrected count.

📝 Proposed fix

```diff
-excerpt: "TanStack AI runs 147 deterministic E2E tests across 7 LLM providers in under 2 minutes. Here's the testing infrastructure that makes it possible."
+excerpt: "TanStack AI runs 137 deterministic E2E tests across 7 LLM providers in under 2 minutes. Here's the testing infrastructure that makes it possible."
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/blog/how-we-test-tanstack-ai-across-7-providers.md` at line 5, Update the
frontmatter excerpt value to reflect the corrected test count by changing the
"excerpt:" string that currently reads "TanStack AI runs 147 deterministic E2E
tests across 7 LLM providers in under 2 minutes. Here's the testing
infrastructure that makes it possible." to use "137" instead of "147" so the
excerpt matches the post content and feature matrix.
