  1. Question about privacy on local models running on LM Studio

    Nov 5, 2023 · It appears that running the local models on personal computers is fully private and they cannot …

  2. LLM Web-UI recommendations : r/LocalLLaMA - Reddit

    Extensions with LM Studio are nonexistent as it's so new and lacks the capabilities. Lollms-webui might be another option. Or plug in one of the others that accepts ChatGPT and use LM Studio's …
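A minimal sketch of what "plug in one of the others" means in practice: LM Studio can run a local server that speaks the OpenAI chat-completions API, so any front end that talks to ChatGPT can be pointed at it instead. The base URL (`http://localhost:1234/v1` is LM Studio's usual default) and the model name here are assumptions; adjust them to your setup.

```python
# Sketch: build an OpenAI-style chat-completions request aimed at a local
# LM Studio server. Port 1234 and the model name are assumptions.
def build_chat_request(base_url: str, model: str, user_message: str) -> tuple[str, dict]:
    """Return the URL and JSON body for an OpenAI-compatible chat call."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }
    return url, payload

url, payload = build_chat_request("http://localhost:1234/v1", "local-model", "Hello!")
# POST `payload` as JSON to `url` with urllib, requests, or the openai client.
```

Because the wire format is the same, web UIs that let you override the API base URL need no other changes.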

  3. What is considered the best uncensored LLM right now?

    WizardLM is really old by now. Have you tried any of the Mistral finetunes? Don't discount it just because of the low parameter count. I was also running WizardLM-33b-4bit for the longest …

  4. Correct way to setup character cards in LM Studio? : r/LocalLLaMA …

    Oct 11, 2023 · Character cards are just pre-prompts, so use the pre-prompt/system-prompt setting and put your character info in there. LM Studio doesn't have support for directly …
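Since a character card is just a pre-prompt, the same idea can be sketched in code: fold the card's fields into the system message that precedes the chat history. The card field names (`name`, `description`) are illustrative, not a fixed format.

```python
# Sketch: turn a character card (field names are illustrative) into the
# system message of an OpenAI-style messages list.
def card_to_messages(card: dict, user_message: str) -> list[dict]:
    system = (
        f"You are {card['name']}. {card['description']} "
        "Stay in character at all times."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

messages = card_to_messages(
    {"name": "Ada", "description": "A witty Victorian mathematician."},
    "Good evening!",
)
```

In LM Studio itself, the rendered system string is what you would paste into the pre-prompt/system-prompt field.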

  5. Privacy? : r/LMStudio - Reddit

    Oct 20, 2023 · My only contribution to this is that LM Studio seems to work regardless of whether or not your internet is active. I don't see why it would have to connect to a server since the …

  6. How do you roleplay with your LLM? : r/LocalLLaMA - Reddit

    Nov 11, 2023 · LM-Studio, on the other hand, is as close as it gets to a local ChatGPT at the moment, I think. It's not really about offering one particular experience or another, but it listens …

  7. Re-use already downloaded models? : r/LMStudio - Reddit

    Jan 4, 2024 · In the course of testing many AI tools I have already downloaded lots of models and saved them to a dedicated location on my computer. I would like to re-use them instead of …
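One common way to re-use already-downloaded GGUF files without moving them is to symlink them into the directory LM Studio scans. The `publisher/repo` folder layout and all names below are assumptions for illustration; LM Studio also lets you change its models directory in its settings, which may be simpler.

```python
# Sketch: symlink an existing GGUF file into an LM Studio-style
# publisher/repo layout instead of re-downloading it. Paths and the
# folder convention are assumptions.
from pathlib import Path
import tempfile

def link_model(existing_gguf: Path, models_dir: Path, publisher: str, repo: str) -> Path:
    """Create models_dir/publisher/repo/<file> as a symlink to existing_gguf."""
    target_dir = models_dir / publisher / repo
    target_dir.mkdir(parents=True, exist_ok=True)
    link = target_dir / existing_gguf.name
    if not link.exists():
        link.symlink_to(existing_gguf.resolve())
    return link

# Demo with a throwaway file standing in for a real download:
tmp = Path(tempfile.mkdtemp())
gguf = tmp / "mistral-7b-instruct.Q4_K_M.gguf"  # hypothetical file name
gguf.touch()
link = link_model(gguf, tmp / "lmstudio-models", "TheBloke", "Mistral-7B-Instruct-GGUF")
```

Symlinks avoid duplicating multi-gigabyte files; on Windows, creating them may require developer mode or admin rights.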

  8. You Should Know: If you can run Stable Diffusion locally, you can ...

    I use LM-Studio; I heard something open source is being made to counter it, which I will try in a few days. But LM Studio works great, especially since I found a few plugins people made for …

  9. Why ollama faster than LMStudio? : r/LocalLLaMA - Reddit

    Apr 11, 2024 · There's definitely something wrong with LM Studio. I've tested it against Ollama using OpenWebUI using the same models. It's dogshit slow compared to Ollama. It's closed …
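Speed comparisons like this one are only meaningful if both backends are timed the same way. A minimal sketch: time any token iterator and report tokens per second; the dummy list below stands in for a streaming response from Ollama or LM Studio, with the same models and settings on both sides.

```python
# Sketch: measure generation throughput uniformly across backends.
# The dummy token list is a stand-in for a real streaming response.
import time

def tokens_per_second(token_iter) -> tuple[int, float]:
    """Count tokens from an iterator and return (count, tokens/sec)."""
    start = time.perf_counter()
    count = sum(1 for _ in token_iter)
    elapsed = time.perf_counter() - start
    return count, count / elapsed if elapsed > 0 else 0.0

count, tps = tokens_per_second(iter(["The", " quick", " brown", " fox"]))
```

Differences between backends often come down to quantization, context length, and GPU offload settings rather than the apps themselves, so hold those constant before concluding one is slower.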

  10. Failed to load model Running LMStudio ? : r/LocalLLaMA - Reddit

    Dec 3, 2023 · Personally, updating Visual Studio helped me, i.e. exactly what Arkonias said below: your C++ redists are out of date and need updating.