If you want to use Ollama models, you only need to set up LightRAG as follows:

```python
from lightrag.llm import ollama_model_complete, ollama_embedding
from lightrag.utils import EmbeddingFunc

rag = LightRAG(
    working_dir=WORKING_DIR,
    # Use an Ollama chat model for completions.
    llm_model_func=ollama_model_complete,
    llm_model_name="your_model_name",
    # Use an Ollama model for embeddings.
    embedding_func=EmbeddingFunc(
        embedding_dim=768,
        max_token_size=8192,
        func=lambda texts: ollama_embedding(
            texts,
            embed_model="nomic-embed-text"
        )
    ),
)
```
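Make sure the embedding model is available in your local Ollama instance first (for example via `ollama pull nomic-embed-text`). Once configured, the `rag` object can ingest new text incrementally; a minimal sketch, assuming the `insert` API from the quick-start examples:

```python
# Incrementally index a new document into the existing knowledge graph.
with open("./newText.txt") as f:
    rag.insert(f.read())
```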
## Evaluation
### Dataset

The dataset used in LightRAG can be downloaded from [TommyChien/UltraDomain](https://huggingface.co/datasets/TommyChien/UltraDomain).
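For reference, a minimal download sketch using the Hugging Face Hub client; the file layout inside the dataset repository is not assumed here:

```python
# Fetch the UltraDomain dataset files into a local cache directory.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TommyChien/UltraDomain",
    repo_type="dataset",  # this is a dataset repo, not a model repo
)
print(local_dir)  # path to the downloaded files
```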
### Generate Query
LightRAG uses the following prompt to generate high-level queries, with the corresponding code in `example/generate_query.py`.

<details>
<summary> Prompt </summary>

```python
Given the following description of a dataset:

...

Output the results in the following structure:
...
```
</details>
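As a rough illustration of how this prompt can be driven, here is a hedged sketch; the helper name, model choice, and `{description}` placeholder are assumptions rather than the exact contents of `example/generate_query.py`:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_queries(prompt_template: str, description: str) -> str:
    # Fill the dataset description into the prompt shown above and
    # ask the model for the user/task/question hierarchy.
    prompt = prompt_template.format(description=description)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```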
### Batch Eval
To evaluate the performance of two RAG systems on high-level queries, LightRAG uses the following prompt, with the specific code available in `example/batch_eval.py`.

<details>
<summary> Prompt </summary>

```python
---Role---
You are an expert tasked with evaluating two answers to the same question based on three criteria: **Comprehensiveness**, **Diversity**, and **Empowerment**.

---Goal---
You will evaluate two answers to the same question based on three criteria: **Comprehensiveness**, **Diversity**, and **Empowerment**.

- **Comprehensiveness**: How much detail does the answer provide to cover all aspects and details of the question?
- **Diversity**: How varied and rich is the answer in providing different perspectives and insights on the question?
...

Output your evaluation in the following JSON format:
...
```
</details>

We extract tokens from the first and the second half of each context in the dataset, then combine them as dataset descriptions to generate queries.
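A hedged sketch of that construction; the tokenizer and the token budget below are illustrative assumptions:

```python
import tiktoken

def build_description(context: str, total_tokens: int = 4000) -> str:
    """Build a compact dataset description from one context document."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(context)
    if len(tokens) <= total_tokens:
        return context  # short contexts are used as-is
    half = total_tokens // 2
    # Keep a token slice from each half of the context and rejoin them.
    first_half = enc.decode(tokens[:half])
    second_half = enc.decode(tokens[-half:])
    return first_half + "\n" + second_half
```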
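The prompt assembly in `example/batch_eval.py` mirrors this wording. A sketch of the relevant excerpt, with the variable name for the system prompt assumed:

```python
# Sketch of the prompt assembly in example/batch_eval.py.
# The name `sys_prompt` is assumed; the prompt text matches the README above.
sys_prompt = """
---Role---
You are an expert tasked with evaluating two answers to the same question based on three criteria: **Comprehensiveness**, **Diversity**, and **Empowerment**.
"""

# The per-query prompt is an f-string that restates the criteria and (in the
# full script) interpolates the question and the two answers being compared.
prompt = f"""
You will evaluate two answers to the same question based on three criteria: **Comprehensiveness**, **Diversity**, and **Empowerment**.

- **Comprehensiveness**: How much detail does the answer provide to cover all aspects and details of the question?
- **Diversity**: How varied and rich is the answer in providing different perspectives and insights on the question?
...
"""
```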