Commit 70baa4b: update README.md
1 parent ea126a7

1 file changed (+37, -6 lines)

1 file changed

+37
-6
lines changed

README.md

Lines changed: 37 additions & 6 deletions
@@ -20,6 +20,9 @@ This repository hosts the code of LightRAG. The structure of this code is based
 ![please add an image description](https://i-blog.csdnimg.cn/direct/b2aaf634151b4706892693ffb43d9093.png)
 </div>
 
+## 🎉 News
+- [x] [2024.10.15]🎯🎯📢📢LightRAG now supports Hugging Face models!
+
 ## Install
 
 * Install from source
@@ -35,17 +38,27 @@ pip install lightrag-hku
 
 ## Quick Start
 
-* Set OpenAI API key in environment: `export OPENAI_API_KEY="sk-...".`
-* Download the demo text "A Christmas Carol by Charles Dickens"
+* Set OpenAI API key in environment if using OpenAI models: `export OPENAI_API_KEY="sk-..."`.
+* Download the demo text "A Christmas Carol by Charles Dickens":
 ```bash
 curl https://raw.githubusercontent.com/gusye1234/nano-graphrag/main/tests/mock_data.txt > ./book.txt
 ```
-Use the below python snippet:
+Use the Python snippet below to initialize LightRAG and perform queries:
 
 ```python
+import os
+
 from lightrag import LightRAG, QueryParam
+from lightrag.llm import gpt_4o_mini_complete, gpt_4o_complete
 
-rag = LightRAG(working_dir="./dickens")
+WORKING_DIR = "./dickens"
+
+if not os.path.exists(WORKING_DIR):
+    os.mkdir(WORKING_DIR)
+
+rag = LightRAG(
+    working_dir=WORKING_DIR,
+    llm_model_func=gpt_4o_mini_complete  # use the gpt_4o_mini_complete LLM model
+    # llm_model_func=gpt_4o_complete  # optionally, use a stronger model
+)
 
 with open("./book.txt") as f:
     rag.insert(f.read())
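The `os.path.exists` / `os.mkdir` guard shown above works, but a single `os.makedirs(..., exist_ok=True)` call is idempotent and also creates missing parent directories; a minimal sketch in plain Python (the directory path is illustrative, no LightRAG dependency):

```python
import os
import tempfile

# os.makedirs with exist_ok=True does not raise if the directory
# already exists, unlike a bare os.mkdir after a stale exists() check.
working_dir = os.path.join(tempfile.mkdtemp(), "dickens")

os.makedirs(working_dir, exist_ok=True)  # first call creates it
os.makedirs(working_dir, exist_ok=True)  # second call is a no-op

print(os.path.isdir(working_dir))  # prints True
```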
@@ -62,13 +75,31 @@ print(rag.query("What are the top themes in this story?", param=QueryParam(mode=
 # Perform hybrid search
 print(rag.query("What are the top themes in this story?", param=QueryParam(mode="hybrid")))
 ```
-Batch Insert
+### Using Hugging Face Models
+If you want to use Hugging Face models, you only need to set up LightRAG as follows:
+```python
+from lightrag.llm import hf_model_complete, hf_embedding
+from transformers import AutoModel, AutoTokenizer
+
+# Initialize LightRAG with a Hugging Face model
+rag = LightRAG(
+    working_dir=WORKING_DIR,
+    llm_model_func=hf_model_complete,  # use a Hugging Face model for text generation
+    llm_model_name='meta-llama/Llama-3.1-8B-Instruct',  # model name from Hugging Face
+    embedding_func=hf_embedding,  # use the Hugging Face embedding function
+    tokenizer=AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2"),
+    embed_model=AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
+)
+```
+### Batch Insert
 ```python
+# Batch Insert: insert multiple texts at once
 rag.insert(["TEXT1", "TEXT2", ...])
 ```
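For batch insert, the list of strings can come from a directory of files; a small helper sketch in plain Python (the folder layout and file names are illustrative, and the commented-out `rag.insert` call assumes the instance from the snippets above):

```python
import tempfile
from pathlib import Path

def load_texts(folder: str) -> list[str]:
    """Read every .txt file in a folder into a list of strings."""
    return [p.read_text(encoding="utf-8") for p in sorted(Path(folder).glob("*.txt"))]

# Demo with two throwaway files
folder = tempfile.mkdtemp()
Path(folder, "a.txt").write_text("TEXT1", encoding="utf-8")
Path(folder, "b.txt").write_text("TEXT2", encoding="utf-8")

print(load_texts(folder))  # prints ['TEXT1', 'TEXT2']

# rag.insert(load_texts(folder))  # batch-insert them in one call
```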
-Incremental Insert
+### Incremental Insert
 
 ```python
+# Incremental Insert: insert new documents into an existing LightRAG instance
 rag = LightRAG(working_dir="./dickens")
 
 with open("./newText.txt") as f:
     rag.insert(f.read())
 ```

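When inserting incrementally over time, it can help to skip documents that were already inserted; a minimal client-side sketch using content hashes (LightRAG's own deduplication behavior is not described in this README, and the `seen` set is an illustrative stand-in for state you would persist yourself):

```python
import hashlib

seen: set[str] = set()

def new_documents(texts: list[str]) -> list[str]:
    """Return only texts not seen before, remembering their fingerprints."""
    fresh = []
    for text in texts:
        key = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            fresh.append(text)
    return fresh

batch1 = new_documents(["doc A", "doc B"])
batch2 = new_documents(["doc B", "doc C"])  # "doc B" is filtered out

print(batch1, batch2)  # prints ['doc A', 'doc B'] ['doc C']

# rag.insert(new_documents(incoming_texts))  # insert only the new ones
```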