
Conversation

@Olyno (Contributor) commented on Oct 23, 2025

Description

This pull request introduces a settings interface that lets users update their local LLM endpoint URLs inside Dyad.

Closes #816

[Screenshot: the new local LLM endpoint settings UI]

Summary by cubic

Add settings to edit local LLM endpoints (Ollama and LM Studio) and route all local model traffic through these values. Users can now point Dyad to remote or custom hosts while keeping sensible defaults.

  • New Features
    • Settings UI to view/update/reset Ollama and LM Studio endpoints.
    • Persist endpoints in UserSettings with localhost defaults.
    • LM Studio URL normalization: trim, add protocol, default port 1234, strip trailing /v1, IPv6-safe (see the sketch after this list).
    • Ollama host parsing: trim, add protocol, default port 11434, IPv6-safe; use OLLAMA_HOST env if set, else settings (the precedence is included in the sketch below).
    • Handlers and clients read settings for API base URLs (Ollama tags, LM Studio models, OpenAI-compatible provider).
    • Added defaults as constants; tests for URL parsing/normalization; clearer connection errors.
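
The URL handling described in the bullets above can be sketched in TypeScript as follows. This is an illustrative sketch of the rules as stated, not the actual implementation: `normalizeBaseUrl` and `resolveOllamaHost` are hypothetical names, and the real helpers live in the ipc utils (e.g. src/ipc/utils/lm_studio_utils.ts).

```ts
// Illustrative sketch only: hypothetical helper names; the real code
// lives in the ipc utils (e.g. src/ipc/utils/lm_studio_utils.ts).
function normalizeBaseUrl(
  raw: string,
  defaultPort: number,
  opts: { stripTrailingV1?: boolean } = {},
): string {
  let input = raw.trim();
  // Add a protocol if the user omitted one.
  if (!/^https?:\/\//i.test(input)) {
    input = `http://${input}`;
  }
  // The URL parser keeps IPv6 hosts bracketed (e.g. http://[::1]),
  // so defaulting the port below stays IPv6-safe.
  const url = new URL(input);
  if (url.port === "") {
    url.port = String(defaultPort);
  }
  // LM Studio clients append /v1 themselves, so strip a trailing /v1.
  if (opts.stripTrailingV1) {
    url.pathname = url.pathname.replace(/\/v1\/?$/, "");
  }
  return url.toString().replace(/\/$/, "");
}

// Precedence for Ollama as described above: OLLAMA_HOST wins when set,
// then the persisted setting, then the localhost default.
function resolveOllamaHost(settingsHost: string | undefined): string {
  const raw = process.env.OLLAMA_HOST?.trim() || settingsHost?.trim();
  return normalizeBaseUrl(raw || "localhost", 11434);
}

// Examples:
//   normalizeBaseUrl("localhost/v1", 1234, { stripTrailingV1: true })
//     -> "http://localhost:1234"
//   normalizeBaseUrl("[::1]", 1234)  -> "http://[::1]:1234"
//   resolveOllamaHost(undefined)     -> "http://localhost:11434"  (no env var)
```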
@cubic-dev-ai (bot) left a comment


3 issues found across 11 files

Prompt for AI agents (all 3 issues)

Understand the root cause of the following 3 issues and fix them.


<file name="src/ipc/utils/lm_studio_utils.ts">

<violation number="1" location="src/ipc/utils/lm_studio_utils.ts:8">
If the user enters an LM Studio URL with http/https but no port (for example `http://localhost`), this branch returns it unchanged so the default 1234 port is never added. The resulting base URL points to port 80, causing LM Studio requests to fail. Please ensure the default port is appended even when a protocol is present but no explicit port was provided.</violation>
</file>
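
A minimal sketch of this fix for `normalizeLmStudioBaseUrl` (the function under test in the suite below; the exact shape of the flagged branch is an assumption): parse the input unconditionally instead of returning protocol-prefixed values early, and default the port only when none was supplied.

```ts
// Sketch of the fix: don't early-return just because a protocol is present.
// Before (hypothetical shape of the flagged branch):
//   if (/^https?:\/\//i.test(input)) return input;  // leaves http://localhost on port 80
function ensureDefaultPort(input: string, defaultPort = 1234): string {
  const url = new URL(/^https?:\/\//i.test(input) ? input : `http://${input}`);
  if (url.port === "") {
    url.port = String(defaultPort); // applies even when a protocol was given
  }
  return url.toString().replace(/\/$/, "");
}
```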

<file name="src/__tests__/normalizeLmStudioBaseUrl.test.ts">

<violation number="1" location="src/__tests__/normalizeLmStudioBaseUrl.test.ts:43">
This test overrides LM_STUDIO_BASE_URL_FOR_TESTING but does not restore the previous value, so any existing setting is lost for later tests. Please capture the original value and restore it in a finally block or after the assertion to keep tests isolated.</violation>
</file>
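
One shape the suggested fix could take, assuming a vitest-style test and that the override lives on a mutable settings object (the actual storage location in the codebase may differ):

```ts
import { expect, it } from "vitest";

// Hypothetical stand-in for wherever LM_STUDIO_BASE_URL_FOR_TESTING lives.
const testSettings: { LM_STUDIO_BASE_URL_FOR_TESTING?: string } = {};

it("keeps the base URL override isolated to this test", () => {
  const original = testSettings.LM_STUDIO_BASE_URL_FOR_TESTING;
  try {
    testSettings.LM_STUDIO_BASE_URL_FOR_TESTING = "http://example.test:1234";
    // ...run the assertions that need the override...
    expect(testSettings.LM_STUDIO_BASE_URL_FOR_TESTING).toBeDefined();
  } finally {
    // Restore the previous value so later tests see the original setting.
    testSettings.LM_STUDIO_BASE_URL_FOR_TESTING = original;
  }
});
```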

<file name="src/components/LocalModelEndpointSettings.tsx">

<violation number="1" location="src/components/LocalModelEndpointSettings.tsx:99">
A single shared saving flag is used, and this finally block resets it to null even when another save request is still running. If the other endpoint's save completes first, it clears the flag, re-enabling this button and allowing a resubmit while the request is still in flight. Guard the reset so only the request that set the flag clears it.</violation>
</file>
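
A common pattern for this guard is a functional state update that clears the flag only when the finishing request still owns it. A sketch with assumed names (`Endpoint` and `saveEndpoint` are illustrative; the component's real state shape may differ):

```ts
import { useState } from "react";

type Endpoint = "ollama" | "lmStudio";

// Stand-in for the component's real persistence call (illustrative).
declare function saveEndpoint(endpoint: Endpoint, url: string): Promise<void>;

export function useEndpointSave() {
  const [saving, setSaving] = useState<Endpoint | null>(null);

  async function handleSave(endpoint: Endpoint, url: string) {
    setSaving(endpoint);
    try {
      await saveEndpoint(endpoint, url);
    } finally {
      // Functional update: clear the flag only if this request still owns
      // it, so one endpoint's save finishing cannot re-enable the other's
      // still-in-flight button.
      setSaving((current) => (current === endpoint ? null : current));
    }
  }

  return { saving, handleSave };
}
```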


@wwwillchen (Contributor) left a comment


Thanks for the pull request.

A couple of high-level thoughts:

  • Given that Dyad currently uses the OLLAMA_HOST env var for Ollama, is there a need to configure the Ollama URL inside Dyad? I'm not an expert in Ollama, but configuring the URL in-app seems unnecessary as long as we're picking up the env var correctly. For LM Studio I could see this being useful, since I'm not aware of a similar env var for it.
  • UX-wise: I think we should configure this in the Model Providers section rather than the AI section. Similar to how each of the cloud providers has its own dedicated page, we should display the local model providers in this grid and then let users navigate to the provider-specific settings page to configure the URL:
[Screenshot: the Model Providers settings grid]
@elsung commented on Oct 30, 2025

> Given that dyad is currently using the OLLAMA_HOST env var for ollama, is there a need to configure the ollama URL inside dyad?

Hm, not sure if there are other folks like me who use Ollama + Tailscale, but for me there's a definite need: it lets me use Dyad on my laptop when I'm out and about and connect to my remote Ollama instance.
