
Phase 1: GA Search/Extract endpoints, citations, structured output#6

Merged
NormallyGaussian merged 5 commits into main from feat/phase-1-modernization
Apr 28, 2026

Conversation


@NormallyGaussian NormallyGaussian commented Apr 27, 2026

First PR from the 0.3.0 / 0.4.0 roadmap. Bumps the SDK, lights up Parallel's GA Search/Extract endpoints, surfaces citations + structured output on the chat model, and refreshes every notebook and example for 0.3.0. All 0.2.x call sites continue to work — the migration is via deprecation warnings, not breaking changes.

What's new

Endpoints + SDK

  • parallel-web bumped ^0.3.3 → ^0.5.1.
  • ParallelSearchTool (renamed from ParallelWebSearchTool — old name still works) and ParallelExtractTool now hit client.search / client.extract (the GA /v1 paths). New params from the GA contract are surfaced: max_chars_total, client_model, session_id, location (Search). The advanced_settings envelope is built automatically from existing flat fields.
  • _client.py slimmed: deleted the four hand-rolled Parallel*Client wrappers (~150 lines) in favor of using parallel.Parallel / parallel.AsyncParallel directly.

Chat model

  • ChatParallel is the canonical name (ChatParallelWeb is a back-compat alias — same class object).
  • with_structured_output() — returns a typed pydantic object via Parallel's response_format JSON schema on lite / base / core. method="json_schema" (default), "json_mode", and "function_calling" (routed to json_schema for cross-provider compat) are all accepted. Raises a clear ValueError on model="speed" since that model silently ignores response_format. include_raw=True returns {"raw", "parsed", "parsing_error"} and properly captures parser failures.
  • Citations — research models populate AIMessage.response_metadata["basis"] with per-field citations / reasoning / confidence. interaction_id (for context chaining) and system_fingerprint are also surfaced.
  • response_metadata["model_name"] — emits the LangChain 1.x standard key (was "model").
  • ChatParallel(model="lite") actually selects lite now — pre-0.3 the Field(alias="model_name") silently swallowed the model= kwarg and forced the default "speed". Fixed; both model="lite" and the legacy model_name="lite" work via a model_validator shim.
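The `model=` fix above can be sketched in pure Python. The helper name `normalize_model_kwargs` is illustrative, not the package's actual internals (the real fix is a pydantic `model_validator(mode="before")` shim), and it assumes an explicit `model=` wins over the legacy alias:

```python
# Hypothetical pure-Python equivalent of the model_validator shim:
# legacy model_name= maps onto model=, and an explicit model= wins.
def normalize_model_kwargs(kwargs):
    out = dict(kwargs)
    if "model_name" in out and "model" not in out:
        out["model"] = out.pop("model_name")
    else:
        out.pop("model_name", None)
    return out
```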

Backward compatibility (deprecation, not breakage)

Three legacy paths keep working with DeprecationWarnings, all slated for removal in 0.4.0:

| Legacy form | What happens now |
| --- | --- |
| `tool.invoke({"objective": "..."})` (no `search_queries`) | Routes to `/v1beta` with a clear DeprecationWarning naming the 0.4.0 sunset. |
| `mode="one-shot"` / `"agentic"` / `"fast"` | Mapped to `"basic"` / `"advanced"` / `"basic"` with a DeprecationWarning. |
| `Extract.excerpts=False` | Accepted with a DeprecationWarning (v1 GA always returns excerpts; size is controlled via `ExcerptSettings`). |
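The mode-string row above can be illustrated with a minimal sketch. `resolve_mode` and `_LEGACY_MODES` are hypothetical names, assuming only the mapping the table describes; the package's actual implementation may differ:

```python
import warnings

# Illustrative mapping for the deprecated mode strings (per the table).
_LEGACY_MODES = {"one-shot": "basic", "fast": "basic", "agentic": "advanced"}

def resolve_mode(mode):
    if mode in _LEGACY_MODES:
        replacement = _LEGACY_MODES[mode]
        warnings.warn(
            f"mode={mode!r} is deprecated; use {replacement!r} "
            "(removal planned for 0.4.0)",
            DeprecationWarning,
            stacklevel=2,
        )
        return replacement
    return mode
```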

Tool return shapes

Unchanged from 0.2.x. tool.invoke({...}) still returns the structured dict (search) or list[dict] (extract). I tried response_format="content_and_artifact" mid-PR; the maintainer pushed back on the breakage and I reverted.

Internal cleanup

  • Both tools' _run / _arun deduped via _finalize_response, _start_text, _completion_text helpers — sync and async bodies are now ~25 lines each instead of ~50.
  • with_structured_output(include_raw=True) correctly populates parsing_error on parse failure (was a lambda-returning-None regardless of outcome).
  • py.typed now bundled in the wheel via [tool.poetry] include.
  • New SourcePolicy pydantic model in langchain_parallel._types.
  • pyproject.toml version 0.2.0 → 0.3.0.
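The `parsing_error` fix mentioned above can be illustrated with a small, hypothetical wrapper (not the library's code): instead of a lambda that always returns `None`, the parser is run inside try/except so real failures surface:

```python
# Illustrative wrapper showing how include_raw=True can capture
# parser failures instead of always reporting parsing_error=None.
def parse_with_raw(raw_text, parser):
    try:
        return {"raw": raw_text, "parsed": parser(raw_text),
                "parsing_error": None}
    except Exception as exc:
        return {"raw": raw_text, "parsed": None, "parsing_error": exc}
```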

Docs and examples (refreshed in this PR)

  • docs/chat.ipynb — switched to ChatParallel; new sections demonstrating with_structured_output() and basis citations.
  • docs/search_tool.ipynb — ParallelSearchTool, search_queries on every example so the notebook hits /v1 GA cleanly, "advanced" instead of legacy mode strings. The OpenAI chain demo (which needed langchain-openai + OPENAI_API_KEY) was replaced with a pointer to demo_agent.ipynb.
  • docs/extract_tool.ipynb — same OpenAI-demo replacement; fixed the literal api_key="your-api-key" that was overriding $PARALLEL_API_KEY at execution time.
  • examples/*.py — all three rewritten to the GA shape with ChatParallel / ParallelSearchTool / SourcePolicy etc.
  • scripts/run_notebooks.py (new) — headless executor that skips %pip and getpass cells, then runs the rest end-to-end. poetry run python scripts/run_notebooks.py is now a release-time smoke test.
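The skip rule in run_notebooks.py can be sketched as a simple cell filter. `should_skip_cell` is a hypothetical helper; the actual script is not reproduced here:

```python
# Sketch of the cell-skip rule: drop %pip installs and interactive
# getpass prompts before executing the remaining cells headlessly.
def should_skip_cell(source: str) -> bool:
    stripped = source.lstrip()
    return stripped.startswith("%pip") or "getpass" in stripped
```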

Migration

For most users, no code changes are required. The recommended-but-optional updates to silence deprecation warnings:

# Search: add search_queries (1-5 keyword strings) to use the GA endpoint.
# 0.2.x  (still works in 0.3.x with a DeprecationWarning; will break in 0.4.0)
tool.invoke({"objective": "What are the latest AI breakthroughs?"})

# 0.3.x preferred
tool.invoke({
    "search_queries": ["latest AI breakthroughs", "AI advances 2026"],
    "objective": "What are the latest AI breakthroughs?",
})

# Search mode: "one-shot"/"fast" → "basic", "agentic" → "advanced"

# Chat: prefer ChatParallel(model="lite") over the model_name= form.
# Read citations from response.response_metadata["basis"] and structured
# outputs via chat.with_structured_output(MyPydanticModel).

Full notes in CHANGELOG.md [0.3.0].

Test plan

  • poetry run ruff check langchain_parallel tests scripts examples — clean
  • poetry run ruff format ... --check — clean
  • poetry run mypy langchain_parallel + poetry run mypy tests — clean
  • poetry run pytest --disable-socket --allow-unix-socket tests/unit_tests/ — 64 pass
  • poetry run pytest tests/integration_tests/test_extract_tool.py — 10 pass
  • poetry run python scripts/run_notebooks.py — chat / search_tool / extract_tool all OK
  • All three examples/*.py run cleanly end-to-end against the live API
  • End-to-end smoke confirms backward compat: search dict return, extract list[dict] return, excerpts=True default, model="lite" actually selecting lite, model_name= alias still works, basis citations populated, with_structured_output returns typed pydantic objects

Out of scope (Phase 2)

ParallelSearchRetriever returning Documents, Task Run / Task Group / FindAll / Monitor surfaces, BYOMCP, hosted MCP toolkit, init_chat_model("parallel:speed") upstream registration. Tracked in IMPROVEMENT_PLAN.md and PR #5.

Bump parallel-web 0.3.3 -> 0.5.1 and migrate Search and Extract to the
v1 GA contract (client.search / client.extract). Surface the new GA
fields (max_chars_total, location, client_model, session_id) and pack
the prior flat settings (excerpts, fetch_policy, source_policy,
max_results) into the advanced_settings envelope. Legacy mode strings
('fast', 'one-shot', 'agentic') and objective-only calls keep working
via deprecation-warning fallback paths.
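The advanced_settings packing described above can be sketched as follows. `build_search_kwargs` is an illustrative name, not the package's actual builder, and the exact field set is assumed from this description:

```python
# Hypothetical sketch: pack flat 0.2.x fields into the GA
# advanced_settings envelope, passing new GA fields at the top level.
def build_search_kwargs(objective, search_queries, *,
                        excerpts=None, fetch_policy=None,
                        source_policy=None, max_results=None,
                        **top_level):
    flat = {
        "excerpts": excerpts,
        "fetch_policy": fetch_policy,
        "source_policy": source_policy,
        "max_results": max_results,
    }
    advanced = {k: v for k, v in flat.items() if v is not None}
    kwargs = {"objective": objective,
              "search_queries": search_queries,
              **top_level}  # e.g. max_chars_total, client_model, session_id
    if advanced:
        kwargs["advanced_settings"] = advanced
    return kwargs
```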

Tools now use response_format='content_and_artifact' so agents see a
compact summary string while ToolMessage.artifact carries the full
Parallel response. Direct tool.invoke({...}) returns the content
string; tool._run(...) returns (content, artifact).

ChatParallelWeb fixes:
- response_metadata uses the LangChain-1.x-standard 'model_name' key
  (was 'model'); also surfaces 'basis' (citations / reasoning /
  confidence) and 'interaction_id' on the research models.
- Add with_structured_output() routing through Parallel's
  response_format JSON schema for lite/base/core; raise a clear error
  on speed since it silently ignores the request. function_calling
  routes to json_schema for cross-provider compatibility.
- Drop the alias='model_name' on the model field that silently
  swallowed ChatParallelWeb(model='lite'); add a model_validator
  shim so existing model_name= kwargs keep working.

Slim _client.py: remove the four hand-rolled Parallel*Client wrappers;
tools now instantiate parallel.Parallel / parallel.AsyncParallel
directly. Add SourcePolicy pydantic model.

Packaging:
- pyproject version 0.2.0 -> 0.3.0
- Add include = ['langchain_parallel/py.typed'] so type info ships
  in the wheel.

Tests: rewrite unit + integration tests around the new tuple return,
the SDK 0.5 surface, the v1 endpoint, and the structured-output
method shape; add TestChatParallelWebUnitLite for the research-model
capability flags. 39 unit tests + 11 extract integration tests pass.
The biggest BC break in the previous commit was the tool return shape:
switching to response_format=content_and_artifact made tool.invoke({...})
return a string instead of the dict/list 0.2.x callers expect. Revert
that — both ParallelWebSearchTool and ParallelExtractTool now return
the structured dict/list directly, like 0.2.x.

Also restore Extract.excerpts: Union[bool, ExcerptSettings] = True so
existing extract_tool.invoke({"urls":[...], "excerpts": True}) keeps
validating. Pass-through is a no-op on the wire (v1 GA always returns
excerpts); excerpts=False is accepted with a DeprecationWarning.

Doc fixes from the review:
- README's stale, inverted `mode` description (one-shot/agentic) replaced
  with the GA basic|advanced semantics + new field table.
- README's broken Extract examples (treated tool.invoke as list[dict]
  but new return was string) work again now that we restored the dict.
- README's create_openai_functions_agent block was advertising tool-call
  agents using ChatParallelWeb, which doesn't support tool calling —
  replaced with a create_agent + Anthropic example using Parallel as a
  tool, plus a one-line "use a different LLM as the agent driver" note.
- README duplicate v0.1 changelog stub deleted.
- extract_tool.py:96 docstring claim about a legacy boolean path was
  unsupported by code; updated to describe the actual behavior.
- search_tool.py docstring example dropped the (content, artifact)
  invocation since we reverted to plain dict returns.

Bug fix: with_structured_output(include_raw=True) used to set
parsing_error=lambda _: None, which never reflected real failures.
Replaced with a try/except wrapper that captures the parser exception
and returns parsed=None, parsing_error=<exc> on failure.

Tests added (per the testing-gaps review):
- model="lite" actually selects "lite" (regression test)
- model_name="lite" back-compat shim works
- lc_attributes exposes model_name
- response_metadata round-trips basis / interaction_id /
  system_fingerprint on both AIMessage and final stream chunk
- with_structured_output rejects on speed
- with_structured_output binds the right response_format for
  json_schema, function_calling (routed to json_schema), and json_mode
- include_raw success and failure paths populate parsed/parsing_error
- SourcePolicy pydantic model and raw dict both flow through
- Top-level passthrough (max_chars_total, client_model, session_id)
- Extract full_content precedence (explicit settings beat tool-level cap;
  full_content=False omits the key)
- Extract excerpts=True is a no-op; excerpts=False emits warning
- Async error wrapping for both tools

61 unit tests + 10 extract integration tests pass; lint, format, and
mypy on src+tests all clean. End-to-end smoke against the real API
confirms backward-compat for: search dict return, extract list[dict]
return, excerpts=True default, excerpts=dict, model="lite" selecting
the research model, basis citations populated, with_structured_output
returning the typed pydantic object.
@NormallyGaussian
Collaborator Author

Review-feedback amendment: restored full backward compatibility.

The three reviewer agents flagged a real regression in the previous commit — response_format="content_and_artifact" made tool.invoke({...}) return a string instead of the dict/list 0.2.x callers expect. That's reverted. Both tools now return the structured payload directly, exactly as in 0.2.x. The README's broken Extract examples work again as a result.

Backward-compat scenarios verified end-to-end

  • ParallelWebSearchTool.invoke({...}) → dict (was dict in 0.2)
  • ParallelExtractTool.invoke({"urls": […], "excerpts": True}) → list[dict] (was list[dict] in 0.2; excerpts: True validates)
  • ParallelExtractTool.invoke({"urls": […], "excerpts": {"max_chars_per_result": …}}) → list[dict] (dict form also works)
  • ChatParallelWeb(model="lite") actually selects lite (was silently ignored pre-0.3 due to Field(alias="model_name") — now genuinely fixed)
  • ChatParallelWeb(model_name="lite") still works via model_validator(mode="before") shim

Other fixes from the review

  • Reverted the dead-end (content, artifact) tuple shape and the misleading "tool-call envelope" migration snippets
  • README mode description was inverted (one-shot / agentic) — replaced with the GA basic / advanced semantics and a full new field table
  • README dead create_openai_functions_agent block (Parallel chat doesn't tool-call) replaced with a create_agent + Anthropic example using Parallel as a tool
  • README duplicate v0.1 changelog stub deleted; points at CHANGELOG.md
  • extract_tool.py:96 docstring claim about a "legacy boolean path with deprecation warning" was unsupported by code — updated to describe what actually happens
  • Bug: with_structured_output(include_raw=True) had parsing_error=lambda _: None (always None even on parse failure) — now wraps the parser in try/except and returns parsed=None, parsing_error=<exc> on failure

Tests added

22 new unit tests and one new integration test, covering behaviors the reviewer flagged as untested: model="lite" selecting lite, the model_name= back-compat shim, lc_attributes["model_name"], basis/interaction_id/system_fingerprint round-trip on both AIMessage and final stream chunk, with_structured_output rejecting speed, the response_format payload bound for json_schema/function_calling/json_mode, include_raw success and failure paths, SourcePolicy as both pydantic model and raw dict, top-level passthrough fields (max_chars_total, client_model, session_id), Extract full_content precedence (settings beat tool cap; False omits the key), Extract excerpts=True/False semantics with the new warning, and async error wrapping for both tools.

61 unit tests + 10 extract integration tests pass. ruff check / ruff format / mypy langchain_parallel / mypy tests all clean.

Deferred to follow-ups

The review surfaced ~150 lines of _run/_arun duplication that could be deduped, plus naming-consistency questions (ChatParallelWeb / ParallelWebSearchTool / ParallelExtractTool). Both worth doing — both are pure-mechanical refactors with non-trivial review surface, so they're better as their own PR than rolled into this one. Tracked in IMPROVEMENT_PLAN.md.

…iases

The v1beta-fallback path was doing two things at once: silently switching
endpoints when search_queries was missing, AND translating param shapes
(basic→one-shot, advanced→agentic, advanced_settings→flat). Both
reviewers and the maintainer pushed back — the v1 contract requires
search_queries, so honor it.

Now: pydantic validates `search_queries: list[str]` (required field) at
the input-schema layer, and the kwargs builder raises ValueError with a
migration hint pointing at the Parallel migration guide. Removes ~40
lines of legacy-mapping plumbing, removes the `endpoint` string threaded
through _run/_arun/_build_metadata, removes "endpoint": "v1" from
search_metadata.

Dedupe pass on _run/_arun:
- Search: extracted _finalize_response (response_obj -> dict + metadata),
  _start_text and _completion_text static helpers for run_manager log
  messages. Sync and async bodies are now ~25 lines each.
- Extract: same shape, with _start_text and _completion_text.
Net ~80 lines deleted; behavior unchanged.

Naming aliases (forward-compat, no breaking changes):
- ChatParallel = ChatParallelWeb
- ParallelSearchTool = ParallelWebSearchTool
- ParallelExtractTool unchanged
Both new and old names exported from __init__; both ARE the same class
object, so isinstance / serdes / snapshot tests are unaffected. README
and CHANGELOG now lead with the new canonical names; old names
documented as aliases.

Tests:
- Replaced test_run_falls_back_to_beta_when_objective_only with
  test_run_requires_search_queries asserting the ValueError + hint.
- Added two alias-identity tests (one per tool).
- Updated the integration-test fixture to pass search_queries.

63 unit tests pass; lint, format, mypy on src+tests all clean.
End-to-end smoke against the real API confirms: aliases resolve to the
same class, search_queries-missing raises a clean validation error,
search_metadata no longer carries the "endpoint" key, all happy paths
work.
@NormallyGaussian
Collaborator Author

Deferred-work amendment.

1. Dropped the v1beta search fallback

You were right — the silent endpoint switch was strange. With v1 GA, search_queries is required and the API rejects calls without it; honoring that contract directly is cleaner than translating param shapes (basic↔one-shot, advanced_settings↔flat) just to fall back to a deprecated endpoint.

Now: pydantic validates search_queries: list[str] (required field) at the input-schema layer, so tool.invoke({"objective": "..."}) produces a clean validation error. The kwargs builder also raises ValueError with a migration hint pointing at https://docs.parallel.ai/search/search-migration-guide for direct _run callers. Removes ~40 lines of legacy-mapping plumbing, the endpoint string threaded through _run/_arun/_build_metadata, and the "endpoint": "v1" key in search_metadata.

This is a true breaking change for the narrow class of 0.2.x callers who passed only objective (silently used /v1beta). Docs and CHANGELOG now flag it explicitly under "Changed (BREAKING)" with a copy-pasteable migration snippet.

2. Deduped _run / _arun in both tools

  • Search: extracted _finalize_response, _start_text, _completion_text. Sync and async bodies are now ~25 lines each instead of ~50.
  • Extract: same shape with _start_text and _completion_text.

Net ~80 lines deleted. Behavior unchanged.

3. Forward-compat naming aliases (non-breaking)

  • ChatParallel = ChatParallelWeb
  • ParallelSearchTool = ParallelWebSearchTool
  • ParallelExtractTool (unchanged — already follows the canonical pattern)

Both names point at the same class object, so isinstance / serdes / snapshot tests are unaffected. README and CHANGELOG now lead with the new canonical names; the old names are documented as aliases that will continue to work indefinitely.

Verification

  • 63 unit tests pass; ruff check, ruff format, mypy langchain_parallel, mypy tests all clean.
  • End-to-end smoke against the live API confirms aliases resolve to the same class, the missing-search_queries case raises a clean pydantic validation error, search_metadata no longer carries the dead endpoint key, and all happy paths still work.

The CHANGELOG [0.3.0] entry is updated with the new "BREAKING" callout for the dropped fallback and the renaming notes.

Soften the previous breaking change: a 0.2.x caller passing only
`objective` no longer hits a hard ValueError. Instead the call routes
to the deprecated `/v1beta` endpoint with a DeprecationWarning that
names the sunset (0.4.0) and points at Parallel's migration guide.

This matches how legacy `mode` strings and `excerpts=False` already
behave (deprecated, not removed). The Parallel API itself supports
v1beta through at least June 2026, so we have runway.

Trade-off: re-introduces ~50 lines of legacy translation in
_build_call_kwargs (basic↔one-shot, advanced↔agentic, flat↔nested
settings). The v1beta path will be removed in 0.4.0; tracking via
CHANGELOG and via the docstring on _build_call_kwargs.

Restored:
- `search_queries: Optional[list[str]] = None` on the input schema
- v1beta branch in `_build_call_kwargs` with explicit DeprecationWarning
- `endpoint` plumbing through `_finalize_response` and `_build_metadata`
- "endpoint" key in `search_metadata` ("v1" or "v1beta") so callers can
  inspect which path was taken
- Unit test `test_run_falls_back_to_beta_when_objective_only` plus a
  new `test_run_raises_when_neither_objective_nor_queries` for the
  remaining error case

CHANGELOG: moved `search_queries`-required, legacy `mode`, and
`excerpts=False` under a single "Deprecated" section with a clear
0.4.0 sunset note. Demoted the search-queries note from BREAKING.
README: search_queries column reverted to Optional with the deprecation
note inline.

64 unit tests pass; lint, format, mypy on src+tests all clean.
End-to-end smoke against the live API confirms both paths: objective-
only routes to v1beta with the warning, search_queries+objective uses
v1 GA cleanly.
@NormallyGaussian
Collaborator Author

Softened the search_queries change to a deprecation.

You're right — for a 0.3 minor we should be gentle. A caller doing tool.invoke({"objective": "..."}) in 0.2.x now gets a clear DeprecationWarning (not a ValueError) and the call still works via the /v1beta endpoint. The warning explicitly names the sunset (langchain-parallel 0.4.0) and points at Parallel's migration guide.

This brings the search-queries change in line with how the other migration paths behave: legacy mode strings and excerpts=False were already DeprecationWarnings, not errors.
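The softened routing can be sketched as follows. `choose_endpoint` is a hypothetical helper; the real logic lives in the tool's `_build_call_kwargs` and is not reproduced here:

```python
import warnings

# Sketch: search_queries selects the GA path; objective-only falls back
# to the deprecated v1beta endpoint with a warning; neither is an error.
def choose_endpoint(payload):
    if payload.get("search_queries"):
        return "v1"          # GA path, no warning
    if payload.get("objective"):
        warnings.warn(
            "Search without search_queries is deprecated and will be "
            "removed in langchain-parallel 0.4.0; see Parallel's "
            "migration guide.",
            DeprecationWarning,
            stacklevel=2,
        )
        return "v1beta"      # deprecated fallback
    raise ValueError("Provide search_queries (preferred) or objective.")
```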

Trade-off

Re-introduces ~50 lines of legacy translation in _build_call_kwargs (basic ↔ one-shot, advanced ↔ agentic, flat ↔ nested settings). That's the cost of being gentle. The path is clearly demarcated as deprecated and will be removed in 0.4.0.

CHANGELOG restructured

Created a new "Deprecated" section listing the three deprecated paths with their planned removal version:

  • Search without search_queries (→ removed in 0.4.0)
  • Legacy mode values (fast, one-shot, agentic)
  • Extract.excerpts=False

Verified

  • 64 unit tests pass; lint, format, mypy on src+tests all clean.
  • Live-API smoke shows both paths: objective-only routes to v1beta with the deprecation warning, search_queries + objective uses v1 GA cleanly. The endpoint key in search_metadata lets callers inspect which path was taken.

…ebooks.py

Notebooks (docs/):
- chat.ipynb: switch instantiation to ChatParallel + a model-menu comment.
  Add new sections demonstrating with_structured_output() (json_schema +
  pydantic) and basis citations on response_metadata.
- search_tool.ipynb: switch to ParallelSearchTool. Replace the legacy
  mode='one-shot' / 'agentic' values with 'advanced'. Add search_queries
  to every previously-objective-only example so the notebook now hits
  the GA /v1 endpoint and doesn't trigger the v1beta-fallback warning.
  Drop the OpenAI chain demo cells (they require langchain-openai +
  OPENAI_API_KEY); replace with a pointer to demo_agent.ipynb which
  already shows the agent pattern with claude-haiku-4-5.
- extract_tool.ipynb: drop the OpenAI chain demo (same reason). Strip
  the demo `api_key="your-api-key"` literal from the instantiation cell
  so the notebook actually executes against PARALLEL_API_KEY.

Examples (examples/):
- chat_example.py: ChatParallelWeb -> ChatParallel; drop model_name=
  alias usage; drop the temperature=/max_tokens= ignored-param noise.
- search_example.py: full rewrite to use ParallelSearchTool, add
  search_queries to all calls, mode='one-shot'/'agentic' -> 'basic'/
  'advanced', SourcePolicy pydantic model, and trim the
  display_metadata helper to the keys actually emitted in 0.3.0
  (search_duration_seconds, endpoint, actual_results_returned —
  removed the dead max_results_requested / source_policy_applied keys).
- extract_tool_example.py: ChatParallelWeb -> ChatParallel.

Tooling:
- scripts/run_notebooks.py: headless executor that skips %pip and
  getpass cells, then executes the rest against the real Parallel API.
  Used as a release-time smoke test. Run with:
      poetry run python scripts/run_notebooks.py
- pyproject.toml: allow `print()` in scripts/.

End-to-end verified against the live API: all three notebooks pass via
scripts/run_notebooks.py; all three examples run cleanly. 64 unit
tests still pass; lint, format, mypy clean.
@NormallyGaussian
Collaborator Author

Refreshed docs/notebooks + examples for 0.3.0, plus a reusable runner.

You were right that I'd skipped this. Audited all six files; three of the four notebooks (everything except demo_agent.ipynb) and two of the three examples had stale code that would either fail or noisily emit deprecation warnings on 0.3.0.

Notebooks

  • chat.ipynb — instantiation switched to ChatParallel with a model-menu comment. Added two new sections demonstrating with_structured_output() (json_schema + pydantic) and basis citations on response_metadata. The kitchen-sink "ignored params" example dropped.
  • search_tool.ipynb — switched to ParallelSearchTool. Stale mode="one-shot" / "agentic" replaced with "advanced" everywhere. Added search_queries to every previously-objective-only example so the notebook now uses the GA /v1 endpoint with no deprecation warning. Dropped the OpenAI chain demo (needs langchain-openai + OPENAI_API_KEY); replaced with a pointer to demo_agent.ipynb which already shows the agent pattern with claude-haiku-4-5.
  • extract_tool.ipynb — dropped the OpenAI chain demo for the same reason. Stripped the literal api_key="your-api-key" from the instantiation cell that was overriding $PARALLEL_API_KEY and breaking execution.

Examples

  • chat_example.py — ChatParallelWeb → ChatParallel; dropped the model_name= alias usage and the temperature=/max_tokens= ignored-param noise.
  • search_example.py — full rewrite. ParallelWebSearchTool → ParallelSearchTool, search_queries on every call, mode="one-shot"/"agentic" → "basic"/"advanced", SourcePolicy pydantic model. display_metadata() helper trimmed to the keys actually emitted in 0.3.0 (the old max_results_requested / source_policy_applied keys were removed when we rewrote _build_metadata — now showing endpoint, search_duration_seconds, actual_results_returned).
  • extract_tool_example.py — ChatParallelWeb → ChatParallel. Otherwise fine.

scripts/run_notebooks.py — release-time smoke test

Headless executor that skips %pip and getpass cells, then runs the rest against the real Parallel API. Usage:

poetry run python scripts/run_notebooks.py                # all docs/*.ipynb
poetry run python scripts/run_notebooks.py docs/chat.ipynb  # specific files

Tracked in pyproject.toml (added T201 to the per-file ignore so scripts/ can print). pyproject.toml [tool.poetry.dev] doesn't depend on nbclient/nbformat/ipykernel — those are install-on-demand for the smoke runner. Worth adding to a CI release job later.

Verified

  • All three notebooks pass scripts/run_notebooks.py
  • All three examples run cleanly end-to-end against the live API
  • 64 unit tests still pass; lint, format, mypy clean

@NormallyGaussian NormallyGaussian marked this pull request as ready for review April 27, 2026 21:58
@NormallyGaussian NormallyGaussian merged commit 1efb164 into main Apr 28, 2026
6 checks passed