Replies: 6 comments
It's a standard.
We were interested in Llamafile because of the improvements it offered for CPU-only inference. It's still not easy to find GPUs, and you'd have to deal with various licensing issues with a well-known GPU provider. Once Llamafile upstreamed its improvements to Llama.cpp, we switched to Llama.cpp, as activity here had died down. Llamafile is still much easier to deploy and use, though, and we're happy to use it and contribute what we can here.
I like it for use cases like games where I want to use LLMs. It lets me distribute without needing to know much of anything about the environment it's being deployed into.
Please make a program or Echo-style hack for an assistant: all offline, all on my local computer/device (like a Mycroft), all in my language.