

I don’t fucking understand why Lemmy is permanently stuck in 2023 with AI
Using RAG (retrieval-augmented generation) results in much lower, almost negligible confabulation rates.
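To make the RAG point concrete, here's a minimal sketch of the retrieve-then-augment loop. Everything here is illustrative: the toy document list stands in for a real embedding index, the bag-of-words cosine retriever stands in for a real embedding model, and the final prompt would be handed to whatever local model you run.

```python
import math
import re
from collections import Counter

# Toy document store standing in for a real vector index (contents are illustrative).
DOCS = [
    "Mistral Small 24B is a 24-billion-parameter open-weight model.",
    "llama.cpp is an inference engine for running GGUF models locally.",
    "RAG retrieves relevant documents and pastes them into the prompt.",
]

def bow(text):
    """Bag-of-words vector: token -> count, keeping dots inside tokens like 'llama.cpp'."""
    return Counter(re.findall(r"[a-z0-9]+(?:\.[a-z0-9]+)*", text.lower()))

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in a.keys() & b.keys())
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=2):
    """Return the k documents most similar to the query."""
    q = bow(query)
    return sorted(DOCS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query):
    """Augment the prompt with retrieved context so the model answers from it, not from memory."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the context below. If the answer is not there, "
        "say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# build_prompt("What is llama.cpp?") is what you'd actually send to the model.
```

The "ONLY the context" instruction plus the retrieved passages is what drives confabulation down: the model is asked to ground its answer in text you supplied instead of free-associating from its weights.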
Download “LM Studio” and you can fetch models and run them locally through it.
I recommend something like an older Mistral model (a FOSS model) for beginners, then move on to Mistral Small 24B, QwQ 32B, and the like.
First, please answer: do you want everything FOSS, or are you OK with a little bit of proprietary code? We can do both.
Fuck ClosedAI
I want everyone here to download an inference engine (use llama.cpp) and get on open source and open data AI RIGHT NOW!
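If you want to try llama.cpp itself, the rough quickstart looks like this. Treat it as a setup sketch, not a definitive guide: the model filename is a placeholder (download a GGUF model first, e.g. from Hugging Face), and build steps can change, so check the repo README for the current instructions.

```shell
# Clone and build llama.cpp (assumes git and cmake are installed)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run a GGUF model -- the path below is a placeholder for one you downloaded
./build/bin/llama-cli -m ./models/mistral-small-24b.gguf -p "Hello" -n 128
```

From there, the same binary directory also has `llama-server`, which exposes a local HTTP API you can point other tools at.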
SCMP is one of the most libshit news outlets in China.
Also, chill out, cracker; it’s news about a reactor. We don’t need your State Dept. programming here.
Oh fucking -please-
This place is genuinely more insufferable than Reddit. That is actually an achievement
Whatever dude, writhe in your own ignorance