- Create `.env-openai.yml` and populate it with your OpenAI credentials:

  ```yaml
  OPENAI_API_KEY: xxx
  OPENAI_API_BASE: xxx
  OPENAI_API_VERSION: xxx
  DEPLOYMENT: xxx
  ```

- `cd llm-vscode-server`
- Build the image:
  ```shell
  docker build -t llm-server:0.1 .
  ```

- Run the container:
  ```shell
  # for linux/mac
  docker container run -p 8001:8000 -v "$(pwd)":/app llm-server:0.1

  # for windows
  docker container run -p 8001:8000 -v $PWD/:/app llm-server:0.1
  ```

- Test it:
  ```shell
  curl http://127.0.0.1:8001/health
  curl http://127.0.0.1:8001/generate -d '{"inputs":"import spacy"}' -H "Content-Type: application/json"
  ```

- Use with the extension:
  - Install the llm-vscode extension in VS Code and open its settings.
  - Change “Config Template” to `custom`.
  - Change “Model ID Or Endpoint” to `http://localhost:8001/` or `http://0.0.0.0:8001/`, whichever works for you.
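The `.env-openai.yml` file above is flat `KEY: value` YAML. As an illustration of how such a file can be turned into environment variables, here is a minimal sketch; the function name and the flat-file restriction are assumptions for this example, not the server's actual loading code:

```python
import os


def load_env_yaml(path):
    """Load a flat KEY: value YAML file (like .env-openai.yml) into os.environ.

    Minimal sketch: handles only top-level scalar pairs, splitting on the
    first colon; no nesting, lists, or quoting are supported.
    """
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or ":" not in line:
                continue  # skip blanks, comments, and non-pair lines
            key, _, value = line.partition(":")
            env[key.strip()] = value.strip()
    os.environ.update(env)
    return env
```

Values containing colons (such as `OPENAI_API_BASE: https://...`) still parse correctly because only the first colon separates key from value.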
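The `curl` test above can also be scripted. A hedged sketch of a Python client for the `/generate` endpoint follows; the function names are illustrative, and the shape of the response JSON depends on the server, so only the request side mirrors the documented `curl` call:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:8001"  # host port mapped by `docker run -p 8001:8000`


def build_generate_request(prompt, base=BASE):
    """Build the same POST to /generate that the curl test sends."""
    data = json.dumps({"inputs": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{base}/generate",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def generate(prompt, base=BASE):
    """Send the request and decode the JSON response."""
    with urllib.request.urlopen(build_generate_request(prompt, base), timeout=30) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # Requires the container from the steps above to be running.
    print(generate("import spacy"))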