Commit 5bfd107

Update token length
1 parent d9054c6 commit 5bfd107

File tree

1 file changed (+1, -1)


lightrag/llm.py

Lines changed: 1 addition & 1 deletion
@@ -268,7 +268,7 @@ async def hf_model_if_cache(
     ).to("cuda")
     inputs = {k: v.to(hf_model.device) for k, v in input_ids.items()}
     output = hf_model.generate(
-        **input_ids, max_new_tokens=200, num_return_sequences=1, early_stopping=True
+        **input_ids, max_new_tokens=512, num_return_sequences=1, early_stopping=True
     )
     response_text = hf_tokenizer.decode(output[0][len(inputs["input_ids"][0]):], skip_special_tokens=True)
     if hashing_kv is not None:
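The change raises the cap on generated tokens passed to Hugging Face's generate() from 200 to 512, so longer responses are less likely to be cut off mid-answer. Below is a minimal, self-contained sketch of the same call pattern outside LightRAG's caching wrapper; the model name ("gpt2") and prompt are illustrative assumptions, not what llm.py ships with, and early_stopping is omitted here because it only affects beam search.

```python
# Sketch of the generation call after this commit (assumptions: "gpt2" as a
# stand-in model, an illustrative prompt; not LightRAG's actual configuration).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical small causal LM for illustration
hf_tokenizer = AutoTokenizer.from_pretrained(model_name)
hf_model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize the prompt and move tensors to the model's device,
# mirroring the inputs dict built in hf_model_if_cache.
input_ids = hf_tokenizer("Summarize the role of max_new_tokens:", return_tensors="pt")
inputs = {k: v.to(hf_model.device) for k, v in input_ids.items()}

# max_new_tokens caps how many tokens are generated beyond the prompt;
# this commit raises that cap from 200 to 512.
output = hf_model.generate(**inputs, max_new_tokens=512, num_return_sequences=1)

# Decode only the newly generated tokens, mirroring the slice in llm.py.
response_text = hf_tokenizer.decode(
    output[0][len(inputs["input_ids"][0]):], skip_special_tokens=True
)
print(response_text)
```

Note that the diff passes **input_ids to generate() while the device-moved dict is named inputs; the sketch above passes the moved dict instead, which keeps prompt tensors and model on the same device when a GPU is used.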
