haopeng-wu/llm-vscode-server

Before running

Create a .env-openai.yml file and populate it with your OpenAI credentials:

OPENAI_API_KEY: xxx
OPENAI_API_BASE: xxx
OPENAI_API_VERSION: xxx
DEPLOYMENT: xxx
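
For reference, here is a minimal sketch of how a Python server could load these credentials into its environment at startup; the repository's actual loading code may differ.

import os
import yaml  # requires the pyyaml package

# Read the key/value pairs from .env-openai.yml and export them as
# environment variables so an OpenAI client can pick them up.
with open(".env-openai.yml") as f:
    creds = yaml.safe_load(f)

for key, value in creds.items():
    os.environ[key] = str(value)  # OPENAI_API_KEY, OPENAI_API_BASE, ...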

To run

1. Build the image.

cd llm-vscode-server
docker build -t llm-server:0.1 .

2. Run the container.

# for linux/mac
docker container run -p 8001:8000 -v "$(pwd)":/app llm-server:0.1

# for windows
docker container run -p 8001:8000 -v $PWD/:/app llm-server:0.1
3. Test it.

curl http://127.0.0.1:8001/health
curl http://127.0.0.1:8001/generate -d '{"inputs":"import spacy"}' -H "Content-Type: application/json"
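
The same test can be scripted; below is a small Python equivalent of the curl calls above, assuming the container is mapped to port 8001 as in step 2 (requires the requests package).

import requests

# Health check: expect a 2xx status once the container is up.
health = requests.get("http://127.0.0.1:8001/health")
print(health.status_code)

# Request a completion for a code prompt, mirroring the curl example.
resp = requests.post(
    "http://127.0.0.1:8001/generate",
    json={"inputs": "import spacy"},
    timeout=30,
)
print(resp.json())
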
4. Use it with the extension.

   1. Install the llm-vscode extension in VS Code and open its settings.
   2. Change "Config Template" to custom.
   3. Change "Model ID Or Endpoint" to http://localhost:8001/ or http://0.0.0.0:8001/, whichever works for your setup.
