
Commit ee6d335

🤖 feat: add OpenAI WebSocket transport opt-in (#3241)
## Summary

Adds an opt-in OpenAI WebSocket transport setting for the built-in OpenAI provider. When `webSocketTransportEnabled` is true and the effective OpenAI wire format is Responses, eligible streaming Responses API requests use `@vercel/ai-sdk-openai-websocket-fetch`; existing HTTP behavior remains the default.

## Background

OpenAI's Responses WebSocket transport can reduce setup overhead for streaming, multi-step workflows, but Mux previously had no first-class provider-level opt-in. This keeps the feature scoped to the built-in OpenAI provider and preserves the saved preference when users temporarily switch to Chat Completions.

## Implementation

- Adds `webSocketTransportEnabled` to provider config/status schemas and OpenAI provider settings.
- Shows the WebSocket control only in Responses wire format; hides it for Chat Completions without clearing the saved value.
- Composes the upstream WebSocket fetch through a small helper that preserves Mux's existing OpenAI fetch wrapper for non-eligible requests.
- Attaches per-model cleanup via a Mux-owned symbol and runs cleanup from main stream and workspace title generation paths.
- Updates provider factory, stream lifecycle, and settings tests for activation, gating, and cleanup behavior.

## Validation

- `make static-check`
- Focused tests for config/status, provider factory activation, helper behavior, stream cleanup, title cleanup, and Settings UI behavior.
- Dogfooded Settings UI with `agent-browser` for default/off, enabled, Chat Completions hidden, and Responses restored states.
- Created live test workspaces, sent OpenAI chat messages, and verified backend-side WebSocket open evidence: `wss://api.openai.com/v1/responses`.

## Risks

The main risk is provider transport composition regressions. The implementation pre-filters non-eligible requests so Mux's existing fetch behavior remains responsible for non-WebSocket HTTP paths, and cleanup is scoped per model/run to avoid process-wide socket lifetime complexity.

---

<details>
<summary>📋 Implementation Plan</summary>

# Implementation Plan: OpenAI WebSocket Transport Opt-In

## Goal

Add a non-breaking, optional **OpenAI WebSocket Transport** setting for the **Built-in OpenAI Provider**. When `webSocketTransportEnabled` is persisted as `true` and the effective OpenAI wire format is Responses, eligible streaming Responses API requests use the published OpenAI WebSocket fetch transport. Existing HTTP behavior remains the default.
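For orientation, here is a minimal sketch of the seam this plan targets, assuming the `createWebSocketFetch()`/`.close()` surface described under "Verified context and constraints" below; `muxOpenAIFetch` and the config shape are illustrative placeholders, not actual Mux identifiers:

```ts
import { createOpenAI } from "@ai-sdk/openai";
import { createWebSocketFetch } from "@vercel/ai-sdk-openai-websocket-fetch";

// Illustrative config shape; the real fields live in Mux's provider config schema.
interface OpenAIProviderConfig {
  webSocketTransportEnabled?: boolean;
  wireFormat?: "responses" | "chatCompletions";
}

// Placeholder for Mux's existing wrapped OpenAI fetch (headers, DevTools, Codex OAuth).
declare const muxOpenAIFetch: typeof fetch;

function createOpenAIProvider(config: OpenAIProviderConfig) {
  // Opt-in gate: explicit true plus effective Responses wire format (missing means Responses).
  const eligible =
    config.webSocketTransportEnabled === true &&
    (config.wireFormat ?? "responses") === "responses";

  // Only eligible streaming Responses requests should use WebSocket; everything else
  // keeps the existing HTTP fetch behavior.
  const wsFetch = eligible ? createWebSocketFetch() : undefined;
  const provider = createOpenAI({ fetch: wsFetch ?? muxOpenAIFetch });

  return { provider, close: () => wsFetch?.close() };
}
```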
## Verified context and constraints

- Product/domain decisions are already captured in `CONTEXT.md` and `PRD.md`:
  - canonical setting name: `webSocketTransportEnabled`
  - provider config only; no request-level override
  - exposed in Settings → Providers → OpenAI near Wire Format
  - inactive/disabled for Chat Completions while preserving the saved flag
  - no custom base URL validation
  - no automatic HTTP fallback after WebSocket failures
  - use `@vercel/ai-sdk-openai-websocket-fetch`; do not implement the WebSocket protocol locally
  - per-stream connection lifecycle; explicit cleanup on completion/error/cancel
  - no ADR for this iteration
- Repo investigation found existing OpenAI-specific provider config/status/UI patterns to mirror:
  - `serviceTier`, `wireFormat`, and `store` in provider config/status/UI
  - OpenAI status values are validated before surfacing to the frontend
  - `ProvidersSection.tsx` already has adjacent OpenAI settings for Service tier, Wire format, and Response storage
- Repo investigation found the main runtime seams:
  - `providerModelFactory.ts` creates OpenAI models through `createOpenAI({ ..., fetch })`
  - the OpenAI branch already wraps fetch for Mux headers, DevTools capture/stripping, Codex OAuth normalization/routing, and custom fetch handling
  - `streamManager.ts` owns the main guaranteed stream cleanup `finally` path
  - `workspaceTitleGenerator.ts` is another `streamText` owner using `AIService.createModel()` models
- Upstream AI SDK docs confirm that OpenAI provider instances accept a custom `fetch`, `createWebSocketFetch()` is passed to `createOpenAI({ fetch })`, the package exposes `.close()`, and only streaming `POST /responses` requests use WebSocket while other requests fall through to standard fetch.

## Recommended approach

**Approach A: Provider-config opt-in + small WebSocket fetch composition module + language-model cleanup symbol**

Net product-code LoC estimate: **~230–360 LoC**

Estimated product-code breakdown:

- config/status schemas and provider service surfacing: ~20–35 LoC
- Settings UI control and helpers: ~55–90 LoC
- WebSocket fetch composition helper: ~55–90 LoC
- language-model cleanup helper: ~35–55 LoC
- provider factory integration: ~35–60 LoC
- stream-owner cleanup integration: ~20–30 LoC

Why this approach:

- keeps the existing `createModel()` return API stable
- isolates protocol package composition behind a small deep module
- preserves existing OpenAI fetch behavior instead of naively replacing fetch
- gives deterministic test seams for enablement and cleanup
- avoids process-wide socket caching, URL validation, fallback retries, or other speculative complexity

Rejected alternatives:

- **Process-wide cached WebSocket connections**: more latency upside across separate user messages but requires cache keys, config invalidation, key rotation handling, and app shutdown cleanup. Product-code estimate if chosen later: ~180–300 additional LoC.
- **Change `createModel()` to return `{ model, cleanup }`**: explicit but high-churn across call sites and tests. Product-code estimate: ~120–220 LoC plus broad type/test churn.
- **Implement the WebSocket protocol locally**: maximum control but duplicates upstream transport behavior and beta protocol maintenance. Product-code estimate: ~220–400 LoC plus higher maintenance risk.
## Implementation phases

### Phase 0 — Documentation alignment

1. Keep `CONTEXT.md` as the canonical glossary and decision summary for this feature.
   - Preserve the terms **Built-in OpenAI Provider**, **Direct OpenAI API Key Path**, **OpenAI WebSocket Transport**, and `webSocketTransportEnabled`.
   - If implementation uncovers a domain decision that changes the agreed semantics, update `CONTEXT.md` in the same change set rather than leaving the glossary stale.
2. Keep `PRD.md` aligned with the implemented scope.
   - It should continue to describe the feature as a non-breaking provider-config opt-in.
   - Update it if implementation materially changes accepted behavior, package name, acceptance criteria, or dogfooding requirements.
3. Do not create an ADR unless implementation introduces a hard-to-reverse architectural decision beyond the current per-stream cleanup-symbol approach.

Quality gate after Phase 0:

- Confirm `CONTEXT.md` and `PRD.md` mention the current package name, `@vercel/ai-sdk-openai-websocket-fetch`, before implementation begins.
- Confirm later implementation changes do not contradict the glossary or PRD acceptance criteria.

### Phase 1 — Dependency and schema/status plumbing

1. Add `@vercel/ai-sdk-openai-websocket-fetch` using Bun.
   - Use `bun add @vercel/ai-sdk-openai-websocket-fetch` so `package.json` and lockfile remain consistent.
   - Keep the dependency in normal dependencies, not dev dependencies, because runtime provider creation uses it.
2. Add `webSocketTransportEnabled: z.boolean().optional()` to the **Built-in OpenAI Provider** config schema.
   - Place it near existing OpenAI-only fields such as `serviceTier`, `defaultModel`, `apiVersion`, and other persisted OpenAI settings.
   - Do not add it to request/provider options schemas; this is intentionally provider config only.
3. Add `webSocketTransportEnabled?: boolean` to provider-status/oRPC schema output.
   - Place it near `wireFormat` and `store` because the settings UI consumes these together.
4. Surface valid persisted values from the provider service.
   - Mirror the `store` boolean pattern: only copy the value into provider status when `typeof config.webSocketTransportEnabled === "boolean"`.
   - Invalid persisted values should be omitted from status rather than surfaced to UI.

Quality gate after Phase 1:

- Run targeted config/provider tests that cover provider schema and provider service status.
- Expected tests to extend:
  - provider config schema tests
  - provider status/oRPC schema conformance tests
  - provider service tests for OpenAI-only fields
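A minimal sketch of the Phase 1 plumbing under these constraints; the schema name, the surrounding fields, and the status object shape are illustrative, while the new field and the `typeof` guard follow the plan above:

```ts
import { z } from "zod";

// Provider config schema (sketch): the new flag sits beside existing OpenAI-only fields.
const openAIProviderConfigSchema = z.object({
  serviceTier: z.string().optional(),
  wireFormat: z.enum(["responses", "chatCompletions"]).optional(),
  store: z.boolean().optional(),
  webSocketTransportEnabled: z.boolean().optional(), // new opt-in flag
});

type OpenAIProviderConfig = z.infer<typeof openAIProviderConfigSchema>;

// Provider status surfacing (sketch): mirror the `store` boolean pattern and
// omit invalid persisted values instead of passing them to the UI.
function surfaceWebSocketTransport(
  config: OpenAIProviderConfig,
  status: { webSocketTransportEnabled?: boolean }
): void {
  if (typeof config.webSocketTransportEnabled === "boolean") {
    status.webSocketTransportEnabled = config.webSocketTransportEnabled;
  }
}
```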
### Phase 2 — Settings UI control

1. Add the OpenAI provider settings control near Wire Format / Response storage.
   - Label: **WebSocket transport**.
   - Use risk-aware helper copy, e.g. "Experimental: uses OpenAI's Responses WebSocket transport for streaming Responses API requests. Unsupported endpoints may fail."
   - Avoid tests that assert exact prose; the prose can evolve.
2. Persist changes through the existing provider config mutation API.
   - Enable: set `keyPath: ["webSocketTransportEnabled"]`, `value: true`.
   - Disable: prefer setting `value: ""` to remove the field if existing provider config mutation semantics treat empty string as delete; otherwise set `false` only if that is the established boolean-toggle convention. Verify the current `setConfig` behavior before implementing this detail.
   - Optimistically update the local provider config state with the chosen value so the UI responds immediately.
3. Disable the control while the effective OpenAI wire format is Chat Completions.
   - Use the same effective default as the existing Wire Format control: missing wire format means Responses.
   - Preserve the saved `webSocketTransportEnabled` value while disabled.
   - Show disabled helper text such as "Only available with Responses wire format."

Quality gate after Phase 2:

- Run targeted Settings UI tests.
- Verify behavior, not copy:
  - control is visible for the built-in OpenAI provider
  - control persists enable/disable through `setProviderConfig`
  - control is disabled when `wireFormat === "chatCompletions"`
  - selecting Chat Completions does not delete the saved WebSocket preference

### Phase 3 — Deep module: OpenAI WebSocket fetch composition

Create a small node-side helper module for WebSocket transport composition.

Responsibilities:

1. Accept the existing Mux OpenAI fetch as its base/fallback behavior.
2. Accept an `enabled` boolean that has already applied runtime eligibility (`webSocketTransportEnabled === true` and effective wire format is Responses).
3. When disabled, return the original fetch and a no-op close hook.
4. When enabled, create a WebSocket fetch via `createWebSocketFetch()` and return:
   - a fetch compatible with `createOpenAI({ fetch })`
   - a close hook that calls the WebSocket fetch's `.close()` exactly once
5. Preserve existing Mux OpenAI fetch behavior.
   - Existing request shaping/normalization must still run.
   - Existing HTTP fallthrough from the WebSocket package should still benefit from Mux's fetch behavior where possible.
   - If preserving the package's HTTP fallthrough requires a wrapper around global fetch, keep that wrapper local and heavily tested; do not reimplement the WebSocket protocol.
6. Do not catch WebSocket transport failures to retry over HTTP.
   - Let eligible request failures surface naturally.

Important implementation detail to verify while coding:

- The published package falls through to `globalThis.fetch` for non-WebSocket requests. If using it directly would bypass Mux's base fetch for HTTP fallthrough, compose a wrapper so non-eligible requests still call Mux's base fetch. Keep this wrapper simple and test it with mocked fetches.

Suggested public interface shape:

- `createOpenAIWebSocketTransportFetch({ enabled, baseFetch }): { fetch: typeof fetch; close: () => void }`
- The helper should assert that `close` is callable when enabled and should make cleanup idempotent.

Quality gate after Phase 3:

- Add direct unit tests for the helper using a mocked `@vercel/ai-sdk-openai-websocket-fetch` package.
- Assert externally observable behavior:
  - disabled returns base-fetch behavior and no-op close
  - enabled delegates eligible requests to the WebSocket fetch
  - non-eligible requests preserve base-fetch behavior
  - close is idempotent and does not throw on repeated calls
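A minimal sketch of the helper under the suggested interface shape. It assumes the pre-filtering strategy mentioned in the summary (route only Responses `POST`s to the WebSocket fetch); the real eligibility test and the exact `createWebSocketFetch()` options may differ:

```ts
import { createWebSocketFetch } from "@vercel/ai-sdk-openai-websocket-fetch";

interface TransportFetch {
  fetch: typeof fetch;
  close: () => void;
}

export function createOpenAIWebSocketTransportFetch(options: {
  enabled: boolean;
  baseFetch: typeof fetch;
}): TransportFetch {
  const { enabled, baseFetch } = options;

  // Disabled: keep the existing Mux fetch and a no-op close hook.
  if (!enabled) {
    return { fetch: baseFetch, close: () => {} };
  }

  const wsFetch = createWebSocketFetch();
  let closed = false;

  // Pre-filter so only Responses requests reach the WebSocket fetch; everything else
  // stays on Mux's existing fetch behavior. The real eligibility test may differ
  // (e.g. also inspecting the request body for `stream: true`).
  const composedFetch: typeof fetch = (input, init) => {
    const url =
      typeof input === "string" ? input : input instanceof URL ? input.href : input.url;
    const method = init?.method ?? (input instanceof Request ? input.method : "GET");
    const eligible = method === "POST" && url.endsWith("/responses");
    return eligible ? wsFetch(input, init) : baseFetch(input, init);
  };

  return {
    fetch: composedFetch,
    close: () => {
      if (closed) return; // idempotent cleanup
      closed = true;
      wsFetch.close();
    },
  };
}
```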
### Phase 4 — Deep module: language-model cleanup helper

Create a Mux-owned cleanup helper for provider-created language models.

Responsibilities:

1. Attach cleanup to a model object without changing the provider model factory return type.
2. Use a private Symbol so the attachment does not collide with AI SDK/provider fields.
3. Assert the attached cleanup is a function.
4. Run cleanup at most once per model.
5. Swallow/log cleanup exceptions so cleanup failures do not mask the original stream completion/error.
6. Clear the cleanup after running to avoid retaining closures longer than necessary.

Suggested public interface shape:

- `attachLanguageModelCleanup(model, cleanup): LanguageModel`
- `runLanguageModelCleanup(model): void`

Quality gate after Phase 4:

- Unit tests for the helper:
  - cleanup runs exactly once
  - repeated cleanup is a no-op
  - models without cleanup are safe
  - thrown cleanup errors are handled according to the chosen helper contract
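A minimal sketch of the symbol-based helper under the suggested interface shape; `LanguageModel` here is a stand-in type and error handling is simplified to a logged warning:

```ts
// Stand-in for the AI SDK's language model type.
type LanguageModel = object;

// Mux-owned symbol so the attachment cannot collide with AI SDK/provider fields.
const CLEANUP_SYMBOL = Symbol("muxLanguageModelCleanup");

type CleanupCarrier = LanguageModel & { [CLEANUP_SYMBOL]?: () => void };

export function attachLanguageModelCleanup(
  model: LanguageModel,
  cleanup: () => void
): LanguageModel {
  if (typeof cleanup !== "function") {
    throw new Error("attachLanguageModelCleanup: cleanup must be a function");
  }
  (model as CleanupCarrier)[CLEANUP_SYMBOL] = cleanup;
  return model;
}

export function runLanguageModelCleanup(model: LanguageModel): void {
  const carrier = model as CleanupCarrier;
  const cleanup = carrier[CLEANUP_SYMBOL];
  if (!cleanup) return; // models without cleanup are safe

  // Clear before running so cleanup executes at most once per model
  // and the closure is not retained longer than necessary.
  delete carrier[CLEANUP_SYMBOL];
  try {
    cleanup();
  } catch (error) {
    // Cleanup failures must not mask the original stream completion/error.
    console.warn("language model cleanup failed", error);
  }
}
```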
### Phase 5 — Provider model factory integration

1. In the OpenAI branch, compute runtime eligibility:
   - persisted/provider config `webSocketTransportEnabled === true`
   - effective wire format is Responses
   - no request-level override support
2. Keep existing config-to-provider-options logic for `serviceTier`, `wireFormat`, and `store` unchanged.
3. Compose the existing OpenAI fetch with the WebSocket helper before passing `fetch` to `createOpenAI`.
   - Do not bypass existing `fetchWithOpenAICodexNormalization` behavior.
   - Do not add a special Codex OAuth guard beyond the agreed Responses-wire-format gating.
   - Do not validate custom base URLs.
4. After creating the model (`provider.responses(modelId)` or `provider.chat(modelId)`), attach the close hook only when the helper created an active WebSocket cleanup.
5. Ensure DevTools middleware wrapping does not discard cleanup.
   - If cleanup is attached before `wrapLanguageModel`, verify whether wrapping preserves object identity/metadata.
   - If wrapping loses the symbol, attach cleanup after final wrapping, or copy cleanup from inner to outer model.
   - Add a test for the DevTools-enabled path if this is ambiguous during implementation.

Quality gate after Phase 5:

- Provider model factory tests:
  - Responses + enabled activates WebSocket composition
  - Responses + missing/false setting does not activate it
  - Chat Completions + enabled does not activate it
  - invalid config value is not treated as enabled
  - custom base URL does not prevent activation when enabled + Responses
  - Codex OAuth is not specially guarded; the code path follows the same eligibility rule

### Phase 6 — Stream owner cleanup integration

1. Main streams (`streamManager`): call `runLanguageModelCleanup(streamInfo.request.model)` or an equivalent model reference in the existing guaranteed cleanup `finally` block.
   - Prefer the actual `LanguageModel` object, not the model string.
   - Run cleanup before deleting stream state.
   - Make cleanup safe for retry paths: if a stream is reset for an internal retry, do not close the WebSocket before the final stream run completes unless a new stream/model is created.
2. Workspace title/name generation: wrap each candidate's `streamText` attempt in `try/finally` and call cleanup for that candidate's model.
   - Ensure cleanup runs when the model does not call the expected tool and the loop continues.
   - Ensure cleanup runs when `streamText` or `toolResults` throws and the loop tries the next candidate.
3. Search for any other `streamText` owners using provider-created models before finalizing.
   - Current exploration found the main stream manager and workspace title generation.
   - If new owners appear, apply the same cleanup pattern.

Quality gate after Phase 6:

- Lifecycle tests:
  - main stream completion closes once
  - main stream error closes once
  - main stream cancellation closes once
  - title generation success closes once
  - title generation failure/retry closes once per candidate model
  - internal multi-step/tool-calling stream does not close between steps
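A minimal sketch of the Phase 6 pattern for the title-generation case, simplified to a plain text result rather than the tool-based flow the generator actually uses; `createModel` and `candidates` are placeholders for `AIService.createModel()` and the candidate model list:

```ts
import { streamText, type LanguageModel } from "ai";

// Placeholder for AIService.createModel(); returns a provider-created model that
// may carry an attached WebSocket close hook.
declare function createModel(modelId: string): LanguageModel;
declare function runLanguageModelCleanup(model: LanguageModel): void;

// Title generation: try each candidate model, always releasing its transport
// resources before moving on, whether the attempt succeeds, throws, or is skipped.
async function generateTitle(candidates: string[], prompt: string): Promise<string | undefined> {
  for (const modelId of candidates) {
    const model = createModel(modelId);
    try {
      const result = streamText({ model, prompt });
      const text = await result.text;
      if (text.trim().length > 0) {
        return text.trim();
      }
      // Model produced nothing useful; fall through to the next candidate.
    } catch {
      // Attempt failed; the loop tries the next candidate.
    } finally {
      // Close the candidate's WebSocket (if any) exactly once per model.
      runLanguageModelCleanup(model);
    }
  }
  return undefined;
}
```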
### Phase 7 — Validation and full static checks

Run validation in increasing scope:

1. Targeted tests added/modified in phases 1–6.
2. Typecheck.
3. Lint/fmt checks.
4. Full static check if the targeted suite and typecheck pass.

Suggested commands:

- `bun test src/common/config/schemas/providersConfig.test.ts`
- `bun test src/common/orpc/schemas/api.test.ts`
- `bun test src/node/services/providerService.test.ts`
- `bun test src/node/services/providerModelFactory.test.ts`
- `bun test src/node/services/streamManager.test.ts`
- `bun test src/browser/features/Settings/Sections/ProvidersSection.test.tsx`
- `make typecheck`
- `make lint`
- `make static-check`

Use `run_and_report` when running multiple validation steps in one shell call, per repo guidance.

## Dogfooding plan

Dogfooding is required before claiming the feature is ready. Live OpenAI runtime dogfooding is optional if credentials/endpoints are unavailable, but UI dogfooding should still run.

### Dogfood setup

1. Start an isolated dev-server environment.
   - Prefer `make dev-server-sandbox` for web/settings dogfooding so the run uses an isolated `MUX_ROOT` and free ports instead of the default `make dev` state.
   - Use `make dev-desktop-sandbox` only if Electron-specific desktop behavior must be verified.
2. Configure a test OpenAI provider.
   - If a real OpenAI API key is available, use it for live streaming verification.
   - If not, use deterministic UI-only dogfooding plus automated tests/mocks for runtime behavior.
3. Use browser/Electron automation to open Settings → Providers → OpenAI.
   - Use `agent-browser` or the repo's Electron automation helper.

### Dogfood scenarios

1. **Default state**
   - Confirm WebSocket transport is shown as disabled/off by default.
   - Screenshot: OpenAI settings default state.
2. **Enable in Responses mode**
   - Ensure Wire Format is Responses.
   - Enable WebSocket transport.
   - Confirm the UI persists the setting after refresh/reopen.
   - Screenshot: enabled setting in Responses mode.
3. **Chat Completions gating**
   - Switch Wire Format to Chat Completions.
   - Confirm the WebSocket control is disabled while the saved preference remains preserved.
   - Screenshot: disabled control in Chat Completions mode.
4. **Return to Responses**
   - Switch Wire Format back to Responses.
   - Confirm the previously saved WebSocket preference reappears as enabled.
   - Screenshot: restored enabled setting.
5. **Live stream, if credentials are available**
   - Send a short prompt with an OpenAI Responses model.
   - Confirm the stream completes or a WebSocket endpoint/proxy failure surfaces clearly without automatic HTTP fallback.
   - Interrupt/cancel one stream and then start another to check cleanup does not block subsequent streams.
   - Record a short video covering enable → prompt → stream/visible failure → Chat Completions disablement.

### Dogfood artifacts

Attach or save:

- screenshots for default, enabled, Chat Completions-disabled, and restored states
- a short video recording for the end-to-end UI flow
- notes on whether live OpenAI credentials were available and whether runtime streaming was verified live or by automated mocks only

## Acceptance criteria

- Existing users see no behavior change unless `webSocketTransportEnabled` is explicitly set true.
- Provider config accepts optional boolean `webSocketTransportEnabled` for the **Built-in OpenAI Provider**.
- Provider status exposes valid boolean values and omits invalid persisted values.
- OpenAI settings UI exposes the control near Wire Format with risk-aware helper copy.
- UI disables the control for Chat Completions and preserves the saved value.
- Runtime WebSocket activation requires `webSocketTransportEnabled === true` and effective Responses wire format.
- Runtime does not validate custom base URLs for WebSocket support.
- Runtime does not retry eligible WebSocket failures over HTTP.
- Existing OpenAI fetch behavior is preserved around the WebSocket composition seam.
- WebSocket resources close on stream completion, error, and cancellation for all provider-created-model stream owners.
- Automated tests cover config/status, settings UI, provider factory activation/gating, helper behavior, and cleanup lifecycle.
- Dogfooding produces screenshots and, when feasible, a video recording.

## Risks and mitigations

- **Risk: WebSocket package HTTP fallthrough bypasses Mux fetch wrappers.**
  - Mitigation: test the composition helper with mocked eligible and non-eligible requests; ensure non-eligible/fallthrough paths use the Mux base fetch.
- **Risk: cleanup symbol is lost when models are wrapped by DevTools middleware.**
  - Mitigation: attach cleanup to the final returned model or explicitly preserve/copy cleanup through wrapping; add a focused test if needed.
- **Risk: cleanup runs too early during AI SDK multi-step streams.**
  - Mitigation: run cleanup only in the outer stream-owner `finally`, not inside fetch response completion per step.
- **Risk: cleanup misses title generation or future stream owners.**
  - Mitigation: search all `streamText` call sites that use provider-created models and add a helper usage pattern; consider a short code comment at the helper call explaining the invariant.
- **Risk: UI tests become tautological.**
  - Mitigation: test behavior and state changes rather than exact prose.
- **Risk: optional live dogfood cannot run without credentials.**
  - Mitigation: make live streaming dogfood optional, but require automated mocked runtime tests and UI screenshots.

## Handoff notes for implementation

- Keep changes surgical; do not refactor unrelated provider config or settings UI code.
- Prefer small deep modules over spreading package-specific logic through provider factory and stream owners.
- Use defensive assertions in the helper modules for impossible assumptions, especially cleanup function type and idempotent close state.
- Do not add request-level `muxProviderOptions.openai.webSocketTransportEnabled` support in this iteration.
- Do not add an ADR unless the implementation discovers a hard-to-reverse architectural choice not covered by this plan.

</details>

---

_Generated with `mux` • Model: `openai:gpt-5.5` • Thinking: `high` • Cost: `$71.27`_

<!-- mux-attribution: model=openai:gpt-5.5 thinking=high costs=71.27 -->
1 parent d31dbc9 commit ee6d335

21 files changed

Lines changed: 1913 additions & 344 deletions

bun.lock

Lines changed: 3 additions & 0 deletions
Some generated files are not rendered by default.

flake.nix

Lines changed: 1 addition & 1 deletion
```diff
@@ -83,7 +83,7 @@

       outputHashMode = "recursive";
       # Marker used by scripts/update_flake_hash.sh to update this hash in place.
-      outputHash = "sha256-nSkVmS55SWfLbUIscBGMzgR2su6vIlE9GcSRDLrn4eI="; # mux-offline-cache-hash
+      outputHash = "sha256-jHp/RsmtwHKsbrD0b86+nb+XQ75pT5y1tEeltT6hDVQ="; # mux-offline-cache-hash
     };

     configurePhase = ''
```

package.json

Lines changed: 1 addition & 0 deletions
```diff
@@ -88,6 +88,7 @@
     "@radix-ui/react-toggle-group": "^1.1.11",
     "@radix-ui/react-tooltip": "^1.2.8",
     "@radix-ui/react-visually-hidden": "^1.2.4",
+    "@vercel/ai-sdk-openai-websocket-fetch": "^1.0.0",
     "@xterm/addon-serialize": "^0.14.0",
     "@xterm/headless": "^6.0.0",
     "ai": "^6.0.72",
```

src/browser/features/Settings/Sections/ProvidersSection.test.tsx

Lines changed: 118 additions & 0 deletions
```diff
@@ -150,19 +150,35 @@ function patchProviderMethods(client: APIClient, providersConfig: ProvidersConfi
     delete providersConfig[input.provider];
     return Promise.resolve({ success: true as const, data: undefined });
   });
+  const setProviderConfig = mock<APIClient["providers"]["setProviderConfig"]>((input) => {
+    const provider = providersConfig[input.provider];
+    if (provider) {
+      const key = input.keyPath[0] as keyof ProviderConfigInfo | undefined;
+      if (key) {
+        if (input.value === "") {
+          delete provider[key];
+        } else {
+          Object.assign(provider, { [key]: input.value });
+        }
+      }
+    }
+    return Promise.resolve({ success: true as const, data: undefined });
+  });
   const onConfigChanged = mock(() => Promise.resolve(emptyConfigChangeIterator()));
 
   Object.assign(client.providers, {
     getConfig,
     addCustomOpenAICompatibleProvider,
     removeCustomProvider,
+    setProviderConfig,
     onConfigChanged,
   });
 
   return {
     addCustomOpenAICompatibleProvider,
     getConfig,
     removeCustomProvider,
+    setProviderConfig,
   };
 }
 
@@ -318,6 +334,108 @@ describe("ProvidersSection", () => {
     ).toBeTruthy();
   });
 
+  test("shows and persists the OpenAI WebSocket transport toggle", async () => {
+    const view = renderProvidersSection();
+    const openAiButton = await view.findByRole("button", { name: /^OpenAI$/ });
+
+    fireEvent.click(openAiButton);
+
+    const openAiCard = getProviderCard(openAiButton);
+    const webSocketToggle = within(openAiCard).getByRole("switch", {
+      name: /WebSocket transport/i,
+    });
+    expect(webSocketToggle).toBeTruthy();
+
+    fireEvent.click(webSocketToggle);
+
+    await waitFor(() => {
+      expect(view.setProviderConfig).toHaveBeenCalledWith({
+        provider: "openai",
+        keyPath: ["webSocketTransportEnabled"],
+        value: true,
+      });
+    });
+  });
+
+  test("clears the OpenAI WebSocket transport preference when toggled off", async () => {
+    const view = renderProvidersSection();
+    view.providersConfig.openai.webSocketTransportEnabled = true;
+    const openAiButton = await view.findByRole("button", { name: /^OpenAI$/ });
+
+    fireEvent.click(openAiButton);
+
+    const openAiCard = getProviderCard(openAiButton);
+    const webSocketToggle = within(openAiCard).getByRole("switch", {
+      name: /WebSocket transport/i,
+    });
+
+    fireEvent.click(webSocketToggle);
+
+    await waitFor(() => {
+      expect(view.setProviderConfig).toHaveBeenCalledWith({
+        provider: "openai",
+        keyPath: ["webSocketTransportEnabled"],
+        value: "",
+      });
+    });
+  });
+
+  test("hides the OpenAI WebSocket transport toggle when Codex OAuth is the active default", async () => {
+    const view = renderProvidersSection();
+    view.providersConfig.openai.codexOauthSet = true;
+    view.providersConfig.openai.apiKeySet = false;
+    view.providersConfig.openai.webSocketTransportEnabled = true;
+    const openAiButton = await view.findByRole("button", { name: /^OpenAI$/ });
+
+    fireEvent.click(openAiButton);
+
+    const openAiCard = getProviderCard(openAiButton);
+    expect(
+      within(openAiCard).queryByRole("switch", {
+        name: /WebSocket transport/i,
+      })
+    ).toBeNull();
+    expect(view.providersConfig.openai.webSocketTransportEnabled).toBe(true);
+  });
+
+  test("hides the OpenAI WebSocket transport toggle when OpenAI uses a custom base URL", async () => {
+    const view = renderProvidersSection();
+    view.providersConfig.openai.baseUrl = "https://proxy.openai.test/v1";
+    view.providersConfig.openai.webSocketTransportEnabled = true;
+    const openAiButton = await view.findByRole("button", { name: /^OpenAI$/ });
+
+    fireEvent.click(openAiButton);
+
+    const openAiCard = getProviderCard(openAiButton);
+    expect(
+      within(openAiCard).queryByRole("switch", {
+        name: /WebSocket transport/i,
+      })
+    ).toBeNull();
+    expect(view.providersConfig.openai.webSocketTransportEnabled).toBe(true);
+  });
+
+  test("hides the OpenAI WebSocket transport toggle for Chat Completions without clearing it", async () => {
+    const view = renderProvidersSection();
+    view.providersConfig.openai.wireFormat = "chatCompletions";
+    view.providersConfig.openai.webSocketTransportEnabled = true;
+    const openAiButton = await view.findByRole("button", { name: /^OpenAI$/ });
+
+    fireEvent.click(openAiButton);
+
+    const openAiCard = getProviderCard(openAiButton);
+    expect(
+      within(openAiCard).queryByRole("switch", {
+        name: /WebSocket transport/i,
+      })
+    ).toBeNull();
+    expect(within(openAiCard).queryByText("WebSocket transport")).toBeNull();
+    expect(view.providersConfig.openai.webSocketTransportEnabled).toBe(true);
+    expect(view.setProviderConfig).not.toHaveBeenCalledWith(
+      expect.objectContaining({ keyPath: ["webSocketTransportEnabled"], value: "" })
+    );
+  });
+
   test("shows remove only for expanded custom provider cards", async () => {
     const view = renderProvidersSection();
     const customButton = await view.findByRole("button", { name: /Acme OpenAI/ });
```