feat: allow to update llm urls #1620
Conversation
3 issues found across 11 files
Prompt for AI agents (all 3 issues)
Understand the root cause of the following 3 issues and fix them.
<file name="src/ipc/utils/lm_studio_utils.ts">
<violation number="1" location="src/ipc/utils/lm_studio_utils.ts:8">
If the user enters an LM Studio URL with http/https but no port (for example `http://localhost`), this branch returns it unchanged so the default 1234 port is never added. The resulting base URL points to port 80, causing LM Studio requests to fail. Please ensure the default port is appended even when a protocol is present but no explicit port was provided.</violation>
</file>
<file name="src/__tests__/normalizeLmStudioBaseUrl.test.ts">
<violation number="1" location="src/__tests__/normalizeLmStudioBaseUrl.test.ts:43">
This test overrides LM_STUDIO_BASE_URL_FOR_TESTING but does not restore the previous value, so any existing setting is lost for later tests. Please capture the original value and restore it in a finally block or after the assertion to keep tests isolated.</violation>
</file>
<file name="src/components/LocalModelEndpointSettings.tsx">
<violation number="1" location="src/components/LocalModelEndpointSettings.tsx:99">
Using a single shared saving flag, this finally block resets it to null even when another save request is still running. If the other endpoint is saved before this call completes, its button becomes re-enabled and can resubmit while the request is still in flight. Guard the reset so only the request that set the flag clears it.</violation>
</file>
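For the first violation, a minimal sketch of the port handling, assuming the helper is called `normalizeLmStudioBaseUrl` and that 1234 is the LM Studio default port the review names; the actual signature in `src/ipc/utils/lm_studio_utils.ts` may differ:

```ts
// Hypothetical sketch; the real helper in lm_studio_utils.ts may differ.
const DEFAULT_LM_STUDIO_PORT = 1234;

export function normalizeLmStudioBaseUrl(input: string): string {
  // Treat a bare host (no protocol) as http.
  const withProtocol = /^https?:\/\//i.test(input) ? input : `http://${input}`;
  const url = new URL(withProtocol);
  // Append the default port even when a protocol is present but no port was
  // given, so `http://localhost` becomes `http://localhost:1234` instead of
  // implicitly pointing at port 80.
  if (url.port === "") {
    url.port = String(DEFAULT_LM_STUDIO_PORT);
  }
  // Strip the trailing slash URL.toString() adds, for a stable base URL.
  return url.toString().replace(/\/$/, "");
}
```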
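For the second violation, the save-and-restore pattern could look like the following; how `LM_STUDIO_BASE_URL_FOR_TESTING` is exposed is an assumption here, modeled as a mutable field on a hypothetical `testState` object:

```ts
// Sketch of the restore pattern; `testState` and `getLmStudioBaseUrl` are
// assumptions about the project layout, not the actual exports.
import { testState, getLmStudioBaseUrl } from "../ipc/utils/lm_studio_utils";

it("uses the testing override when set", () => {
  const original = testState.LM_STUDIO_BASE_URL_FOR_TESTING;
  try {
    testState.LM_STUDIO_BASE_URL_FOR_TESTING = "http://127.0.0.1:9999";
    expect(getLmStudioBaseUrl()).toBe("http://127.0.0.1:9999");
  } finally {
    // Restore the previous value so later tests stay isolated.
    testState.LM_STUDIO_BASE_URL_FOR_TESTING = original;
  }
});
```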
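For the third violation, one way to guard the reset, assuming the shared flag records which endpoint is saving; `saveEndpoint` is a hypothetical stand-in for the component's real save call:

```tsx
import { useState } from "react";

// Sketch, not the actual component code: the flag stores which endpoint is
// currently saving, and only the request that set it may clear it.
type Endpoint = "ollama" | "lmStudio";

function useEndpointSave(
  saveEndpoint: (endpoint: Endpoint, url: string) => Promise<void>,
) {
  const [saving, setSaving] = useState<Endpoint | null>(null);

  async function handleSave(endpoint: Endpoint, url: string) {
    setSaving(endpoint);
    try {
      await saveEndpoint(endpoint, url);
    } finally {
      // Guard the reset: if another endpoint's save has since taken over the
      // flag, leave it alone so that button stays disabled until it finishes.
      setSaving((current) => (current === endpoint ? null : current));
    }
  }

  return { saving, handleSave };
}
```

Per-endpoint flags (or a set of in-flight endpoints) would be more robust if both saves can overlap repeatedly, but the functional updater above is the minimal guard the review asks for.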
Thanks for the pull request.
A couple of high-level thoughts:
- Given that dyad currently uses the `OLLAMA_HOST` env var for Ollama, is there a need to configure the Ollama URL inside dyad? I'm not an expert in Ollama, but configuring its URL in dyad seems unnecessary assuming we're picking up the env var correctly (a sketch of that pickup follows below). For LM Studio, I could see this being useful, as I'm not aware of any similar env var for it.
- UX-wise: I think we should configure this in the Model Providers section, not the AI section. Similar to how each of the cloud providers has its own dedicated page, we should display the local model providers in this grid and then allow users to navigate to the provider-specific settings page to configure the URL:

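On the first point, env-var pickup with a fallback might look like this sketch; the function name is hypothetical, and 11434 is Ollama's standard default port, not something taken from dyad's code:

```ts
// Hedged sketch of OLLAMA_HOST pickup; not dyad's actual implementation.
const DEFAULT_OLLAMA_HOST = "http://localhost:11434";

export function getOllamaBaseUrl(): string {
  const fromEnv = process.env.OLLAMA_HOST?.trim();
  if (!fromEnv) {
    return DEFAULT_OLLAMA_HOST;
  }
  // OLLAMA_HOST is often set as a bare host:port; assume http if no protocol.
  return /^https?:\/\//i.test(fromEnv) ? fromEnv : `http://${fromEnv}`;
}
```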
Description
This pull request introduces an interface that allows users to update their LLM URLs inside Dyad.
Closes #816
Summary by cubic
Add settings to edit local LLM endpoints (Ollama and LM Studio) and route all local model traffic through these values. Users can now point Dyad to remote or custom hosts while keeping sensible defaults.