Favorite Local LLMs/apps

While I’m not skilled in AI, I’ve been trying to grasp as much as possible over the last year, one concept being local, offline models. My favorite tool of the year is Pinokio, a one-click installer for tons of things I would have no idea how to set up on my own.

Curious to hear about everyone’s favorite apps, models, interfaces, and projects that they’ve used.

I’ll start with some:

Oobabooga (AKA Text Generation Web UI) - for running a local chatbot. There are others I’d like to try, but this one does fine for me.

Automatic1111 - Stable Diffusion UI, simple enough for me to understand but powerful enough to make great images.

ComfyUI - I like the node system, but I moved back to A1111 for simplicity.

Bark TTS - I’ve mostly been playing around, but from what I have done with it, it can do a great job. AI audio/music isn’t quite there yet, in my opinion.

If anyone has advice for learning about AI or anything to add, I’d love to see more!


I’m using an Ollama Docker container to pull down models. There’s a GitHub repo, and they’ve got a list of frameworks in the sidebar.
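For anyone wanting to try the same setup, this is the common way to run Ollama under Docker (CPU-only; `llama3` is just an example model tag, and NVIDIA users would add `--gpus=all`):

```shell
# Start the Ollama server in a container, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a model into the container, then chat with it interactively
docker exec -it ollama ollama pull llama3
docker exec -it ollama ollama run llama3
```

The server then listens on port 11434, which is what most of the frameworks in that sidebar list connect to.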


Tortoise TTS, implemented through JahrodMica’s ai-voice-cloning repo on GitHub, is also one that should be up on that list! I agree with pretty much all of them.

  1. For ComfyUI, there are a few things that I really like about it. It does a fantastic job of dealing with low-VRAM situations, as I have a measly 4GB GPU. Additionally, it’s got the ability to be automated. I don’t use Automatic 1111 or EasyDiffusion any more, so I don’t know if they have this feature, but there is a Save As Script extension, which is built upon the ComfyUI Python extension. It takes your workflow and exports it as a Python script, which you can tweak to your needs. You can assign variables as your CLIP text encode prompts, and then make it such that you can perform long-running campaigns, using CSV files full of prompts (you can’t use commas as prompt delimiters), or UDP/TCP to feed in prompts.
  2. I cross-posted in the GPT-Pilot Pythagoras forum, but if you haven’t tried out Sydney QT, I recommend checking it out. You get the LM Studio experience but with Microsoft (Bing) Copilot as the LLM inference provider. It also offers up OpenAI-compatible endpoints that you can use to make your own chatbot or to be part of an Autogen, AutoGPT, or GPT-Pilot workflow.
  3. The Gemini Pro 1.0 API is freaking underrated. You can call the API 60 times a minute for free, so once a second. Using a multi-threaded Python script, you can have this thing crank out image generation prompts like a mad scientist, some 6 prompts a thread…the prompts just flow onto your computer. You could use the same concept and tie it into topic 2: have Gemini ask Bing Copilot what’s happening in the world in a round-robin fashion, and then make a periodic report for you…essentially a Researcher agent workflow.
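A minimal sketch of the long-running-campaign idea in item 1. The `generate()` stub below is a hypothetical stand-in for whatever function the Save As Script extension exported from your workflow (the real name and signature depend on your export), and the prompts file uses `;` as the delimiter since, as noted above, the prompts themselves contain commas:

```python
import csv
import io

# Hypothetical: in practice you'd import the function that the Save As Script
# extension exported from your ComfyUI workflow, e.g.
#   from my_exported_workflow import generate
def generate(prompt: str) -> str:
    return f"queued: {prompt}"  # stand-in for queueing the real workflow

# Prompts file uses ';' as the delimiter, because prompts contain commas
prompts_csv = "a cat, oil painting;masterpiece\na dog, photo;realistic\n"

results = []
for positive, style in csv.reader(io.StringIO(prompts_csv), delimiter=";"):
    results.append(generate(f"{positive}, {style}"))

print(results)
```

The same loop could read from a real file or a UDP/TCP socket instead of the inline string; only the iteration source changes.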
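Since item 2 mentions OpenAI-compatible endpoints, here is a stdlib-only sketch of talking to one. The base URL and port are assumptions (check the project's README for the actual address it serves on), and the request is only built, not sent, so it runs without a server:

```python
import json
import urllib.request

# Assumed base URL; any OpenAI-compatible server (LM Studio, Ollama, etc.)
# is addressed the same way, just with its own host/port.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(prompt: str) -> urllib.request.Request:
    payload = {
        "model": "gpt-3.5-turbo",  # many local servers ignore or remap this
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer not-needed-locally",
        },
    )

req = build_chat_request("Summarize today's AI news in one line.")
# With a server running, you would send it like this:
# resp = urllib.request.urlopen(req)
# print(json.load(resp)["choices"][0]["message"]["content"])
print(req.full_url)
```

This is also all an Autogen or GPT-Pilot workflow does under the hood: point its OpenAI client at that base URL.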
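The multi-threaded, rate-limited pattern from item 3 could be sketched like this. `call_gemini()` is a hypothetical stand-in for the real API call, and the lock-plus-sleep pacing is one crude way to keep all threads combined under the 60-requests-per-minute free tier:

```python
import time
import threading
from concurrent.futures import ThreadPoolExecutor

RATE_LIMIT = 60              # free-tier limit cited above: 60 calls per minute
INTERVAL = 60 / RATE_LIMIT   # minimum spacing between requests, in seconds
pace_lock = threading.Lock()

def call_gemini(prompt: str) -> str:
    # Hypothetical stand-in: with the google-generativeai package installed,
    # the real call would go here instead of this canned response.
    return f"image prompt for: {prompt}"

def rate_limited_call(prompt: str) -> str:
    # Serialize the waits so all threads together stay under the rate limit
    with pace_lock:
        time.sleep(INTERVAL)
    return call_gemini(prompt)

topics = ["neon city", "forest ruins", "desert storm"]
with ThreadPoolExecutor(max_workers=6) as pool:
    results = list(pool.map(rate_limited_call, topics))
print(results)
```

Swapping the topics list for questions fed to a Copilot endpoint, round-robin, gives you the Researcher-agent loop described above.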