### Feature Request
Currently, Langflow's unified model providers only support OpenAI, Google, IBM WatsonX, and Ollama as first-class native options. While the LiteLLM bundle exists for proxying to multiple LLMs, there is no direct way to configure generic OpenAI-compatible endpoints (such as OpenRouter, Together.ai, Groq, Perplexity, or self-hosted vLLM/TGI instances) as global model providers. Users must manually configure these in individual components or use workarounds.
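For context, "OpenAI-compatible" here means the standard `openai` Python client can talk to the endpoint once its `base_url` is overridden — this is the pattern users currently have to repeat in each individual component. A minimal sketch (the endpoint, env var, and model ID are illustrative placeholders):

```python
import os
from openai import OpenAI

# The same client works against any OpenAI-compatible endpoint;
# only base_url (and the provider-specific key) change.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",    # e.g. OpenRouter
    api_key=os.environ["OPENROUTER_API_KEY"],   # placeholder env var
)

resp = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct",   # illustrative model ID
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```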
### Motivation
With the proliferation of OpenAI-compatible API endpoints from various providers, users increasingly need a standardized way to connect to these services.
### Supported Providers (Examples)
The following services use the OpenAI-compatible API format and would benefit from this feature:
| Provider | Base URL Example |
|----------|------------------|
| OpenRouter | https://openrouter.ai/api/v1 |
| Together AI | https://api.together.xyz/v1 |
| Groq | https://api.groq.com/openai/v1 |
| Fireworks AI | https://api.fireworks.ai/inference/v1 |
| Perplexity | https://api.perplexity.ai |
| Anyscale | https://api.endpoints.anyscale.com/v1 |
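Most OpenAI-compatible services also expose the `/v1/models` listing, so any base URL from the table can be sanity-checked with the standard client before wiring it into a flow. A quick probe, assuming a valid key (the Groq URL is just one example from the table):

```python
from openai import OpenAI

# List the models an OpenAI-compatible endpoint exposes.
client = OpenAI(base_url="https://api.groq.com/openai/v1", api_key="YOUR_KEY")
for model in client.models.list():
    print(model.id)
```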
### Benefits
- **Reduced configuration overhead** - Configure once in the Models pane and reuse across all flows (aligns with Langflow 1.8's global provider philosophy)
- **Flexibility** - Support any OpenAI-compatible endpoint without waiting for native provider support from the Langflow team
- **Self-hosted model support** - Easily connect to internally hosted models served by vLLM, TGI, or similar inference engines
- **Cost optimization** - Switch between providers offering the same models at different price points
- **Fallback/redundancy** - Quickly switch to alternative providers during outages (see the sketch after this list)
- **Privacy/compliance** - Connect to on-premises or region-specific deployments for data sovereignty
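To illustrate the fallback/redundancy point: because every provider speaks the same protocol, falling over to an alternative endpoint is just retrying with a different `base_url`. A minimal sketch — the provider order, keys, and model IDs are illustrative, not part of any proposed Langflow API:

```python
from openai import OpenAI

# Illustrative endpoints only; any OpenAI-compatible pair works the same way.
PROVIDERS = [
    {"base_url": "https://api.groq.com/openai/v1",
     "api_key": "GROQ_KEY", "model": "llama-3.1-8b-instant"},
    {"base_url": "https://openrouter.ai/api/v1",
     "api_key": "OPENROUTER_KEY", "model": "meta-llama/llama-3.1-8b-instruct"},
]

def complete_with_fallback(prompt: str) -> str:
    """Try each provider in order; fall through to the next on any API error."""
    last_error = None
    for p in PROVIDERS:
        try:
            client = OpenAI(base_url=p["base_url"], api_key=p["api_key"])
            resp = client.chat.completions.create(
                model=p["model"],
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content
        except Exception as exc:  # broad for the sketch; narrow in real code
            last_error = exc
    raise RuntimeError("all providers failed") from last_error
```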
### Your Contribution
No response