Couch "Spud" Berner

  • 0 Posts
  • 2 Comments
Joined 11 months ago
Cake day: August 6th, 2023

  • Oh wait, I think I misunderstood. I thought you had local language models running on your computer. I have seen that be discussed before with varying results.

    Last time I tried running my own model was in the early days of the Llama release, on an RTX 3060. Generation was much slower than OpenAI’s API, and the output quality was way off.

    It doesn’t have to be perfect, but I’d like to make my own API calls from a remote device phoning home instead of hitting OpenAI’s servers. Using my own documents as a reference would be a plus too, just to keep my info private and still accessible to the LLM.
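    A minimal sketch of that "phone home" setup, assuming a self-hosted server on the home network (llama.cpp's server and Ollama both expose an OpenAI-compatible chat-completions route; the IP, port, and model name here are placeholders, and stuffing documents into the system prompt is the simplest way to give the model private reference material):

    ```python
    import json
    from urllib import request

    # Hypothetical self-hosted endpoint on the home LAN (or reachable via VPN).
    # llama.cpp's server and Ollama both serve /v1/chat/completions locally.
    API_URL = "http://192.168.1.10:8080/v1/chat/completions"

    def build_payload(question, context_docs, model="local-model"):
        """Build an OpenAI-style chat request, placing private documents
        in the system prompt so the LLM can reference them."""
        context = "\n\n".join(context_docs)
        return {
            "model": model,
            "messages": [
                {"role": "system",
                 "content": "Answer using only this reference material:\n" + context},
                {"role": "user", "content": question},
            ],
        }

    def ask(question, context_docs):
        """Send the request to the self-hosted server and return the reply text."""
        payload = build_payload(question, context_docs)
        req = request.Request(
            API_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]
    ```

    From the remote device, `ask("...", my_docs)` then talks only to the home server, so the documents never leave the local network.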

    Didn’t know about ElevenLabs. Checking them out soon.

    Edit because writing is hard.