
Commit e5b010d

docs(integrations): catch up Parallel pages to langchain-parallel 0.4.0 (#3787)
## Overview

Bring the Parallel integration pages current with [`langchain-parallel`](https://github.com/parallel-web/langchain-parallel) 0.4.0 (now on PyPI). 0.4.0 adds five new public surfaces (Retriever, FindAll, Task API, Monitor) and renames the two pre-0.4 classes to canonical names (`ChatParallel`, `ParallelSearchTool`; the old names remain importable as aliases). Existing pages were stuck on the 0.2.x surface.

FYI - I am working with Karan and submitting this PR on behalf of Parallel Web Systems.

Package: [github.com/parallel-web/langchain-parallel](https://github.com/parallel-web/langchain-parallel) · [pypi.org/project/langchain-parallel](https://pypi.org/project/langchain-parallel/)

## Type of change

**Type:** New documentation page + Update existing documentation

- 4 new pages: `retrievers/parallel.mdx`, `tools/parallel_findall.mdx`, `tools/parallel_task.mdx`, `tools/parallel_monitor.mdx`
- 5 updated pages: `chat/parallel.mdx`, `tools/parallel_search.mdx`, `tools/parallel_extract.mdx`, `providers/parallel.mdx`, plus the index pages `tools/index.mdx` and `retrievers/index.mdx`
- 1 plumbing change: `pipeline/preprocessors/link_map.py` adds `@[ClassName]` entries for the 0.4.0 classes

## Related issues/PRs

- GitHub issue: n/a
- Feature PR: n/a
- Source package: [parallel-web/langchain-parallel](https://github.com/parallel-web/langchain-parallel)

## Checklist

- [x] I have read the [contributing guidelines](README.md), including the [language policy](https://docs.langchain.com/oss/python/contributing/overview#language-policy)
- [x] I have tested my changes locally using `docs dev` — built via `pipeline build`, served with `mint dev`, browsed all 8 pages in a real browser via Playwright
- [x] All code examples have been tested and work correctly — every Python snippet across all 8 pages was executed live against the Parallel API (and Anthropic, for the `create_agent` chaining examples)
- [x] I have used **root relative** paths for internal links (`/oss/...`)
- [x] I have updated navigation in `src/docs.json` if needed — n/a for individual integration component pages (they surface through `chat/index`, `tools/index`, `retrievers/index`, `providers/parallel`); the relevant index pages and the provider page were updated to list the new entries

## Additional notes

### Detailed scope

**New pages**

- `retrievers/parallel.mdx` — `ParallelSearchRetriever` (`BaseRetriever`).
- `tools/parallel_findall.mdx` — `ParallelFindAllTool`.
- `tools/parallel_task.mdx` — unified Task API page covering `ParallelTaskRunTool`, `ParallelDeepResearch`, `ParallelTaskGroup`, `ParallelEnrichment`, plus the helpers (`parse_basis`, `build_task_spec`, `verify_webhook`, BYOMCP via `McpServer`).
- `tools/parallel_monitor.mdx` — `ParallelMonitor`.

**Updated pages**

- `chat/parallel.mdx` — refreshed for canonical `ChatParallel`; new sections for `with_structured_output()`, citations / `interaction_id`, and a model decision table (`speed` vs `lite` / `base` / `core`).
- `tools/parallel_search.mdx` — refreshed for canonical `ParallelSearchTool`; documents the GA shape (`search_queries` required, `mode='basic'`/`'advanced'`, new `SourcePolicy` pydantic model, GA forwarding params).
- `tools/parallel_extract.mdx` — `Optional[ExcerptSettings]` and `full_content` precedence; cross-link to search.
- `providers/parallel.mdx` — surfaces all six current Parallel components.
- `tools/index.mdx`, `retrievers/index.mdx` — list entries for the new pages.

### Areas worth careful review

1. **Unified Task API page.** `ParallelTaskRunTool` is a `BaseTool`; `ParallelDeepResearch`, `ParallelTaskGroup`, and `ParallelEnrichment` are `Runnable`s/plain classes. Putting all four on one page under `/tools/` is intentional — they share the same processor menu, basis-citation shape, and webhook-verification helpers, and a typical reader picking between them benefits from comparing in one place. The page leads with a "Which surface should I use?" decision blurb and a four-row integration-details table. Happy to split if you'd rather.
2. **`ParallelMonitor` placement.** Strictly, Monitor is a stateful CRUD client (create/retrieve/list/delete/list_events/simulate_event), not a `BaseTool`. We placed it under `/tools/` because that's the most discoverable home in the current integration taxonomy.
3. **`@[ClassName]` link-map entries for the new 0.4.0 surfaces** point at `reference.langchain.com/python/langchain-parallel/...` paths that will only resolve once the reference site rebuilds against PyPI 0.4.0 (already published). Forward-correct rather than broken-in-CI behavior — flagging for awareness.

### Validation

- `scripts/check_cross_refs.py` — ✅ all references resolve.
- `make lint_prose` (Vale) on the touched files — ✅ 0 errors, 0 warnings, 0 suggestions.
- `make broken-links` — 0 of the build's broken links touch any of the changed/added Parallel files.
- `pipeline/preprocessors/link_map.py` passes `ruff format --check`, `ruff check`, and `ty check` individually.
- All `docs.parallel.ai/...` URLs in the new content audited live — 10/10 HTTP 200.
- All 8 pages browsed in `mint dev` via Playwright: components, tables, code blocks, callouts, cross-links, output blocks render correctly.
- Every Python snippet across all 8 pages exercised live against the Parallel API. This includes the multi-minute surfaces: `ParallelFindAllTool` `core` generator (38 candidates returned), `ParallelDeepResearch` `pro-fast` (default processor, structured 30-citation report), and the full `ParallelMonitor` CRUD lifecycle. The `create_agent` chaining examples were exercised against Anthropic.

### AI agent disclosure

This PR was authored with substantial assistance from an AI agent (Claude). All content was reviewed by the human contributor; code examples were verified against the package source and exercised live against the Parallel API.
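For reviewers unfamiliar with the rename strategy mentioned in the overview ("the old names remain importable as aliases"): the usual way to do this is a module-level alias assignment plus re-export. A minimal sketch of that pattern — generic stdlib Python with hypothetical class bodies, not the package's actual source:

```python
# Sketch of the rename-with-alias pattern the PR describes.
# The class body here is hypothetical; langchain-parallel's real
# module layout and signatures may differ.


class ChatParallel:
    """Canonical class under its new 0.4.0 name."""

    def __init__(self, model: str = "speed") -> None:
        self.model = model


# Pre-0.4 name kept as an alias: it is the *same* object, so
# isinstance checks and `is` comparisons keep working for old code.
ChatParallelWeb = ChatParallel

__all__ = ["ChatParallel", "ChatParallelWeb"]

assert ChatParallelWeb is ChatParallel
assert ChatParallelWeb("lite").model == "lite"
```

Because the alias is an assignment rather than a subclass, there is no second type to diverge from the canonical one; deprecation warnings, if desired later, can be layered on via a module `__getattr__`.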
---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
1 parent cde99bb commit e5b010d

11 files changed

Lines changed: 1094 additions & 402 deletions

File tree

‎pipeline/preprocessors/link_map.py‎

Lines changed: 10 additions & 0 deletions
@@ -178,9 +178,19 @@ class LinkMap(TypedDict):
     "ChatDeepSeek": "langchain-deepseek/chat_models/ChatDeepSeek",
     # langchain-parallel
     "langchain-parallel": "langchain-parallel/",
+    "ChatParallel": "langchain-parallel/chat_models/ChatParallel",
     "ChatParallelWeb": "langchain-parallel/chat_models/ChatParallelWeb",
+    "ParallelSearchTool": "langchain-parallel/search_tool/ParallelSearchTool",
     "ParallelWebSearchTool": "langchain-parallel/search_tool/ParallelWebSearchTool",
     "ParallelExtractTool": "langchain-parallel/extract_tool/ParallelExtractTool",
+    "ParallelSearchRetriever": "langchain-parallel/retrievers/ParallelSearchRetriever",
+    "ParallelFindAllTool": "langchain-parallel/findall/ParallelFindAllTool",
+    "ParallelTaskRunTool": "langchain-parallel/tasks/ParallelTaskRunTool",
+    "ParallelDeepResearch": "langchain-parallel/tasks/ParallelDeepResearch",
+    "ParallelTaskGroup": "langchain-parallel/tasks/ParallelTaskGroup",
+    "ParallelEnrichment": "langchain-parallel/tasks/ParallelEnrichment",
+    "ParallelMonitor": "langchain-parallel/monitors/ParallelMonitor",
+    "SourcePolicy": "langchain-parallel/_types/SourcePolicy",
     # langchain-amazon-nova
     "langchain-amazon-nova": "langchain-amazon-nova/",
     "ChatAmazonNova": "langchain-amazon-nova/chat_models/ChatAmazonNova",
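For context on what these entries do: the docs pipeline rewrites `@[ClassName]` markers in page source into links on the reference site. A minimal sketch of that substitution — the `resolve_links` helper and trimmed mapping are hypothetical illustrations, not the real `link_map.py` preprocessor; the base URL follows the `reference.langchain.com/python/...` scheme described in this PR:

```python
import re

# Trimmed, illustrative copy of the mapping this diff extends.
LINK_MAP = {
    "ChatParallel": "langchain-parallel/chat_models/ChatParallel",
    "ParallelMonitor": "langchain-parallel/monitors/ParallelMonitor",
}

BASE = "https://reference.langchain.com/python/"


def resolve_links(markdown: str) -> str:
    """Replace @[`Name`] / @[Name] markers with markdown reference links."""

    def repl(m: re.Match) -> str:
        name = m.group(1)
        path = LINK_MAP.get(name)
        if path is None:
            return m.group(0)  # unknown name: leave the marker untouched
        return f"[`{name}`]({BASE}{path})"

    return re.sub(r"@\[`?(\w[\w-]*)`?\]", repl, markdown)


print(resolve_links("See the @[`ChatParallel`] API reference."))
# See the [`ChatParallel`](https://reference.langchain.com/python/langchain-parallel/chat_models/ChatParallel) API reference.
```

This also shows why the review note above matters: the mapping is resolved at build time, so the generated URLs are only as live as the reference site they point at.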
Lines changed: 92 additions & 105 deletions
@@ -1,33 +1,44 @@
 ---
-title: "ChatParallelWeb integration"
-description: "Integrate with the ChatParallelWeb chat model using LangChain Python."
+title: "ChatParallel integration"
+description: "Integrate with the ChatParallel chat model using LangChain Python."
 ---
 
-Parallel provides real-time web research capabilities through an OpenAI-compatible chat interface, allowing your AI applications to access current information from the web.
+>[Parallel](https://platform.parallel.ai/) is a real-time web search and content extraction platform built for LLMs and AI applications.
 
-<Tip>
-**API Reference**
+`ChatParallel` is an OpenAI-compatible chat interface to Parallel's models. The `speed` model is a low-latency conversational model with no citations; the research models (`lite`, `base`, `core`) browse the web and return per-field citations and structured output via JSON schema.
 
-For detailed documentation of all features and configuration options, head to the @[`ChatParallelWeb`] API reference.
-</Tip>
+<Note>
+`ChatParallel` is the canonical class name. The earlier `ChatParallelWeb` continues to work as an alias for the same class.
+</Note>
 
 ## Overview
 
 ### Integration details
 
 | Class | Package | Serializable | JS/TS Support | Downloads | Latest Version |
-| :--- | :--- | :---: | :---: | :---: | :---: |
-| @[`ChatParallelWeb`] | @[`langchain-parallel`] | ❌ | ❌ | <a href="https://pypi.org/project/langchain-parallel/" target="_blank"><img src="https://static.pepy.tech/badge/langchain-parallel/month" alt="Downloads per month" noZoom height="100" class="rounded" /></a> | <a href="https://pypi.org/project/langchain-parallel/" target="_blank"><img src="https://img.shields.io/pypi/v/langchain-parallel?style=flat-square&label=%20&color=orange" alt="PyPI - Latest version" noZoom height="100" class="rounded" /></a> |
+| :--- | :--- | :---: | :---: | :---: | :---: |
+| @[`ChatParallel`] | @[`langchain-parallel`] | ❌ | ❌ | <a href="https://pypi.org/project/langchain-parallel/" target="_blank"><img src="https://static.pepy.tech/badge/langchain-parallel/month" alt="Downloads per month" noZoom height="100" class="rounded" /></a> | <a href="https://pypi.org/project/langchain-parallel/" target="_blank"><img src="https://img.shields.io/pypi/v/langchain-parallel?style=flat-square&label=%20&color=orange" alt="PyPI - Latest version" noZoom height="100" class="rounded" /></a> |
 
 ### Model features
 
 | [Tool calling](/oss/langchain/tools) | [Structured output](/oss/langchain/structured-output) | Image input | Audio input | Video input | [Token-level streaming](/oss/langchain/streaming/) | Native async | [Token usage](/oss/langchain/models#token-usage) | [Logprobs](/oss/langchain/models#log-probabilities) |
-| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
-| ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ |
+| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+| ❌ | ✅ (research models) | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ |
+
+### Choosing a model
+
+| Model | Latency | Web browsing | Citations | Structured output | Use when |
+| :--- | :--- | :---: | :---: | :---: | :--- |
+| `speed` | low | ❌ | ❌ | ❌ | Conversational answers from the model's parametric knowledge. |
+| `lite` | medium | ✅ | ✅ | ✅ | Fact lookups with citations. |
+| `base` | medium-high | ✅ | ✅ | ✅ | Mid-depth research with citations. |
+| `core` | higher | ✅ | ✅ | ✅ | Multi-source research with citations. |
+
+`speed` does not honor `response_format`, so `with_structured_output()` raises a clear error there. Use a research model when you need parsed pydantic output or per-field citations.
 
 ## Setup
 
-To access Parallel models you'll need to install the `langchain-parallel` integration package and acquire a [Parallel](https://platform.parallel.ai) API key.
+To access Parallel models, install the `langchain-parallel` integration package and acquire a [Parallel](https://platform.parallel.ai) API key.
 
 ### Installation
 
@@ -42,76 +53,46 @@ To access Parallel models you'll need to install the `langchain-parallel` integr
 
 ### Credentials
 
-Head to [Parallel](https://platform.parallel.ai) to sign up and generate an API key. Once you've done this set the `PARALLEL_API_KEY` environment variable in your environment:
+Head to [Parallel](https://platform.parallel.ai) to sign up and generate an API key. Set `PARALLEL_API_KEY` in your environment:
 
 ```python
 import getpass
 import os
 
 if not os.environ.get("PARALLEL_API_KEY"):
-    os.environ["PARALLEL_API_KEY"] = getpass.getpass("Enter your Parallel API key: ")
+    os.environ["PARALLEL_API_KEY"] = getpass.getpass("Parallel API key:\n")
 ```
 
 ## Instantiation
 
-Now we can instantiate our model object and generate responses. The default model is `"speed"` which provides fast responses:
-
 ```python
-from langchain_parallel import ChatParallelWeb
+from langchain_parallel import ChatParallel
 
-llm = ChatParallelWeb(
+llm = ChatParallel(
     model="speed",
-    # temperature=0.7,
-    # max_tokens=None,
     # timeout=None,
     # max_retries=2,
-    # api_key="...",  # If you prefer to pass api key in directly
-    # base_url="https://api.parallel.ai",
-    # other params...
+    # api_key="...",  # optional if PARALLEL_API_KEY is set
+    # base_url="https://api.parallel.ai",  # default
 )
 ```
 
-See the @[`ChatParallelWeb`] API Reference for the full set of available model parameters.
-
-<Note>
-**OpenAI compatibility**
-
-Parallel supports many OpenAI-compatible parameters for easy migration (e.g., `response_format`, `tools`, `top_p`), though most are ignored by the Parallel API. See the [OpenAI Compatibility](#openai-compatibility) section for more details.
-</Note>
-
----
+See the @[`ChatParallel`] API reference for the full set of available parameters.
 
 ## Invocation
 
 ```python
 messages = [
-    (
-        "system",
-        "You are a helpful assistant with access to real-time web information.",
-    ),
+    ("system", "You are a helpful assistant with access to real-time web information."),
     ("human", "What are the latest developments in AI?"),
 ]
 ai_msg = llm.invoke(messages)
-ai_msg
-```
-
-```text
-AIMessage(content='Here\'s a summary of the latest AI news and breakthroughs as of ...', additional_kwargs={}, response_metadata={'model': 'speed', 'finish_reason': 'stop', 'created': 1764043410}, id='run--3866fa98-6ac9-4585-8d23-99c5542b582b-0')
-```
-
-```python
 print(ai_msg.content)
 ```
 
-```text
-Here's a summary of the latest AI news and breakthroughs as of...
-```
-
----
-
 ## Chaining
 
-We can chain our model with a prompt template like so:
+Chain the model with a prompt template:
 
 ```python
 from langchain_core.prompts import ChatPromptTemplate
@@ -120,8 +101,8 @@ prompt = ChatPromptTemplate(
     [
         (
             "system",
-            "You are a helpful research assistant with access to real-time web information. "
-            "Provide comprehensive answers about {topic} with current data.",
+            "You are a research assistant with access to real-time web information. "
+            "Answer questions about {topic} using current sources.",
         ),
         ("human", "{question}"),
     ]
@@ -131,85 +112,101 @@ chain = prompt | llm
 chain.invoke(
     {
         "topic": "artificial intelligence",
-        "question": "What are the most significant AI breakthroughs in 2025?",
+        "question": "What are the most significant AI breakthroughs in 2026?",
     }
 )
 ```
 
+## Structured output
+
+On the research models (`lite`, `base`, `core`), `ChatParallel.with_structured_output(...)` binds a JSON-schema `response_format` and returns a parsed pydantic object (or dict). Calling it on `speed` raises a `ValueError`, since `speed` silently ignores `response_format`.
+
+```python
+from pydantic import BaseModel, Field
+
+class Founder(BaseModel):
+    name: str = Field(description="Full name of the founder")
+    company: str = Field(description="Company they founded")
+
+structured = ChatParallel(model="lite").with_structured_output(Founder)
+parsed = structured.invoke([("human", "Who founded SpaceX?")])
+print(parsed)
+```
+
 ```text
-AIMessage(content="Based on the provided search results, here's a summary of the significant AI breakthroughs and trends...", additional_kwargs={}, response_metadata={'model': 'speed', 'finish_reason': 'stop', 'created': 1764043419}, id='run--9c521362-6724-4299-9e65-0565ec13d997-0')
+name='Elon Musk' company='SpaceX'
+```
+
+`method="json_schema"` (the default), `method="json_mode"`, and `method="function_calling"` are all accepted. Pass `include_raw=True` to receive the full `{"raw", "parsed", "parsing_error"}` envelope and capture parser failures:
+
+```python
+structured = ChatParallel(model="lite").with_structured_output(Founder, include_raw=True)
+res = structured.invoke([("human", "Who founded SpaceX?")])
+res["parsed"]  # Founder(...) or None
+res["parsing_error"]  # Exception or None
+res["raw"]  # original AIMessage
+```
+
+## Citations
+
+Research models populate `AIMessage.response_metadata["basis"]` with per-field citations, the model's reasoning, and a confidence label. `response_metadata["interaction_id"]` is surfaced for multi-turn context chaining; `system_fingerprint` is forwarded when present.
+
+```python
+cited = ChatParallel(model="lite").invoke([
+    ("human", "Who is the current CEO of OpenAI? One sentence."),
+])
+print(cited.content)
+print("\nbasis:", cited.response_metadata.get("basis"))
+print("interaction_id:", cited.response_metadata.get("interaction_id"))
 ```
----
 
 ## Streaming
 
-ChatParallelWeb supports streaming responses for real-time output:
+`ChatParallel` supports per-token streaming:
 
 ```python
 for chunk in llm.stream(messages):
     print(chunk.content, end="")
 ```
 
----
-
 ## Async
 
-You can also use async operations:
-
 ```python
-# Async invoke
 ai_msg = await llm.ainvoke(messages)
 
-# Async streaming
 async for chunk in llm.astream(messages):
     print(chunk.content, end="")
 ```
 
----
-
 ## Token usage
 
-<Note>
-**No token usage tracking**
-
-Parallel does not currently provide token usage metadata. The `usage_metadata` field will be `None`.
-</Note>
+Parallel does not currently provide token usage metadata. `usage_metadata` is `None`.
 
 ```python
 ai_msg = llm.invoke(messages)
 print(ai_msg.usage_metadata)
+# None
 ```
 
-```text
-None
-```
-
----
-
 ## Response metadata
 
-Access response metadata from the API:
-
 ```python
 ai_msg = llm.invoke(messages)
 print(ai_msg.response_metadata)
+# {'model_name': 'speed', 'finish_reason': 'stop', 'created': 1764043410}
 ```
 
-```python
-{'model': 'speed', 'finish_reason': 'stop', 'created': 1703123456}
-```
-
----
+For research models, `response_metadata` additionally carries `basis` (per-field citations), `interaction_id` (for multi-turn chaining), and `system_fingerprint` when available.
 
 ## Error handling
 
-The integration provides enhanced error handling for common scenarios:
+The integration raises `ValueError` with a descriptive message on common failure modes:
 
 ```python
-from langchain_parallel import ChatParallelWeb
+from langchain_parallel import ChatParallel
 
 try:
-    llm = ChatParallelWeb(api_key="invalid-key")
+    llm = ChatParallel(api_key="invalid-key")
     response = llm.invoke([("human", "Hello")])
 except ValueError as e:
     if "Authentication failed" in str(e):
@@ -218,54 +215,44 @@ except ValueError as e:
         print("API rate limit exceeded, please try again later")
 ```
 
----
-
 ## OpenAI compatibility
 
-<Info>
-**OpenAI-compatible API**
-
-ChatParallelWeb is fully compatible with many [OpenAI Chat Completions API](https://platform.openai.com/docs/api-reference/chat) parameters, making migration seamless. However, most advanced parameters (like `response_format`, `tools`, `top_p`) are accepted but ignored by the Parallel API.
-</Info>
+`ChatParallel` accepts many [OpenAI Chat Completions API](https://platform.openai.com/docs/api-reference/chat) parameters for drop-in OpenAI-client migration. Advanced parameters such as `tools`, `tool_choice`, `top_p`, and `frequency_penalty` are accepted but ignored by the Parallel API.
 
 ```python
-llm = ChatParallelWeb(
+llm = ChatParallel(
     model="speed",
-    # These parameters are accepted but ignored by Parallel
-    response_format={"type": "json_object"},
+    # accepted but ignored by Parallel:
     tools=[{"type": "function", "function": {"name": "example"}}],
     tool_choice="auto",
     top_p=1.0,
     frequency_penalty=0.0,
     presence_penalty=0.0,
     logit_bias={},
     seed=42,
-    user="user-123"
+    user="user-123",
 )
 ```
 
----
+For structured output, prefer `ChatParallel.with_structured_output(...)` (see [Structured output](#structured-output)) over passing `response_format` directly. It works on the research models and returns a parsed object.
 
 ## Message handling
 
-The integration automatically handles message formatting and merges consecutive messages of the same type to satisfy API requirements:
+The integration merges consecutive messages of the same type to satisfy API requirements:
 
 ```python
 from langchain.messages import HumanMessage, SystemMessage
 
-# These consecutive system messages will be automatically merged
+# Consecutive system messages are automatically merged before the API call.
 messages = [
     SystemMessage("You are a helpful assistant."),
     SystemMessage("Always be polite and concise."),
-    HumanMessage("What is the weather like today?")
+    HumanMessage("What is the weather like today?"),
 ]
 
-# Automatically merged to single system message before API call
 response = llm.invoke(messages)
 ```
 
----
-
 ## API reference
 
-For detailed documentation of all features and configuration options, head to the @[`ChatParallelWeb`] API reference or the [Parallel chat API quickstart](https://docs.parallel.ai/chat-api/chat-quickstart).
+For detailed documentation, head to the @[`ChatParallel`] API reference or the [Parallel chat API quickstart](https://docs.parallel.ai/chat-api/chat-quickstart).
