Commit 6a29b5d

Update Docker deployment comments for LLM and embedding hosts
1 parent fdf0fe0 commit 6a29b5d

1 file changed

env.example

Lines changed: 2 additions & 1 deletion
@@ -208,6 +208,7 @@ OPENAI_LLM_MAX_COMPLETION_TOKENS=9000
 # OPENAI_LLM_EXTRA_BODY='{"chat_template_kwargs": {"enable_thinking": false}}'

 ### Use the following command to see all supported options for the Ollama LLM:
+### If LightRAG is deployed in Docker, use host.docker.internal instead of localhost in LLM_BINDING_HOST
 ### lightrag-server --llm-binding ollama --help
 ### Ollama Server Specific Parameters
 ### OLLAMA_LLM_NUM_CTX must be provided, and should be at least MAX_TOTAL_TOKENS + 2000
@@ -229,7 +230,7 @@ EMBEDDING_BINDING=ollama
 EMBEDDING_MODEL=bge-m3:latest
 EMBEDDING_DIM=1024
 EMBEDDING_BINDING_API_KEY=your_api_key
-# If the embedding service is deployed within the same Docker stack, use host.docker.internal instead of localhost
+# If LightRAG is deployed in Docker, use host.docker.internal instead of localhost
 EMBEDDING_BINDING_HOST=http://localhost:11434

 ### OpenAI compatible (VoyageAI embedding openai compatible)
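
As context for these comments, here is a minimal sketch of the resulting setup when LightRAG runs in a container and Ollama runs on the host. The .env values mirror the Ollama defaults shown above; the compose service name lightrag and the extra_hosts mapping are illustrative assumptions, not part of this commit:

    # .env (assumed filename): point both bindings at the host, not localhost
    LLM_BINDING_HOST=http://host.docker.internal:11434
    EMBEDDING_BINDING_HOST=http://host.docker.internal:11434

    # docker-compose.yml (hypothetical service definition): on Linux,
    # host.docker.internal is not provided automatically and must be
    # mapped to the host gateway explicitly
    services:
      lightrag:
        env_file: .env
        extra_hosts:
          - "host.docker.internal:host-gateway"

Docker Desktop on macOS and Windows resolves host.docker.internal out of the box; the host-gateway mapping (Docker Engine 20.10+) gives Linux containers the same name, resolving to the host's bridge IP so the container can reach an Ollama instance listening on the host's port 11434.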
