1 file changed: +2 −1

@@ -208,6 +208,7 @@ OPENAI_LLM_MAX_COMPLETION_TOKENS=9000
 # OPENAI_LLM_EXTRA_BODY='{"chat_template_kwargs": {"enable_thinking": false}}'

 ### use the following command to see all supported options for Ollama LLM
+### If LightRAG is deployed in Docker, use host.docker.internal instead of localhost in LLM_BINDING_HOST
 ### lightrag-server --llm-binding ollama --help
 ### Ollama Server Specific Parameters
 ### OLLAMA_LLM_NUM_CTX must be provided, and should be at least MAX_TOTAL_TOKENS + 2000
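The two rules in this hunk (pick `host.docker.internal` over `localhost` when running inside Docker, and keep `OLLAMA_LLM_NUM_CTX` at least `MAX_TOTAL_TOKENS + 2000`) can be sketched as a small check. This is a hypothetical illustration, not LightRAG's actual startup code; the helper names are invented:

```python
import os

def resolve_binding_host(port: int = 11434) -> str:
    """Hypothetical helper: choose the LLM_BINDING_HOST value.

    Inside a container, "localhost" refers to the container itself,
    so the host machine's Ollama daemon must be reached via
    host.docker.internal instead.
    """
    in_docker = os.path.exists("/.dockerenv")  # crude Docker detection
    host = "host.docker.internal" if in_docker else "localhost"
    return f"http://{host}:{port}"

def num_ctx_is_valid(num_ctx: int, max_total_tokens: int) -> bool:
    # Mirrors the comment above: NUM_CTX must be at least
    # MAX_TOTAL_TOKENS + 2000 so the context window is not exceeded.
    return num_ctx >= max_total_tokens + 2000
```

For example, with `MAX_TOTAL_TOKENS=32000`, an `OLLAMA_LLM_NUM_CTX` of 34000 passes the check while 33000 does not.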
@@ -229,7 +230,7 @@ EMBEDDING_BINDING=ollama
 EMBEDDING_MODEL=bge-m3:latest
 EMBEDDING_DIM=1024
 EMBEDDING_BINDING_API_KEY=your_api_key
-# If the embedding service is deployed within the same Docker stack, use host.docker.internal instead of localhost
+# If LightRAG is deployed in Docker, use host.docker.internal instead of localhost
 EMBEDDING_BINDING_HOST=http://localhost:11434

 ### OpenAI compatible (VoyageAI embedding openai compatible)
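The embedding settings in this hunk are plain environment variables, so a consumer reads them with ordinary env lookups. A minimal sketch (not LightRAG's actual config loader), falling back to the example values shown above when a variable is unset:

```python
import os

# Read the Ollama embedding settings from the environment, using the
# env.example defaults as fallbacks.
embedding_config = {
    "binding": os.environ.get("EMBEDDING_BINDING", "ollama"),
    "model": os.environ.get("EMBEDDING_MODEL", "bge-m3:latest"),
    "dim": int(os.environ.get("EMBEDDING_DIM", "1024")),
    "host": os.environ.get("EMBEDDING_BINDING_HOST", "http://localhost:11434"),
}

# EMBEDDING_DIM must match the model's output size (bge-m3 emits
# 1024-dimensional vectors), so a positive integer is the minimum sanity check.
assert embedding_config["dim"] > 0
```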