Self-hosted Providers
This page will help you set up self-hosted providers with LLM Vision.
LocalAI
To use LocalAI you need to have a LocalAI server running. You can find the installation instructions here. During setup you'll need to provide the IP address of your machine and the port on which LocalAI is running (default is 8000). If you want to use HTTPS to send requests, you need to check the HTTPS box.
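Before adding the provider in LLM Vision, it can help to confirm that the server is actually reachable over the network. The sketch below queries LocalAI's OpenAI-compatible /v1/models endpoint with Python's requests library; the host address, port, and HTTPS flag are placeholder assumptions you should replace with your own values.

```python
# Minimal sketch: verify a LocalAI server is reachable before adding it to LLM Vision.
# The host and port below are assumptions -- replace them with your own values.
import requests

LOCALAI_HOST = "192.168.1.100"   # IP address of the machine running LocalAI (assumption)
LOCALAI_PORT = 8000              # port LocalAI listens on (assumption; matches the default above)
USE_HTTPS = False                # set to True if your LocalAI server is behind HTTPS

scheme = "https" if USE_HTTPS else "http"
base_url = f"{scheme}://{LOCALAI_HOST}:{LOCALAI_PORT}"

# LocalAI exposes an OpenAI-compatible API, so /v1/models lists the installed models.
response = requests.get(f"{base_url}/v1/models", timeout=10)
response.raise_for_status()

for model in response.json().get("data", []):
    print(model.get("id"))
```

If the request succeeds and prints your models, the same host, port, and HTTPS settings should work in the LLM Vision setup dialog.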
Ollama
To use Ollama you must set up an Ollama server first. You can download it from here. Once installed, you need to download a model to use. A full list of all available models can be found here. Keep in mind that only models with vision capabilities can be used. The recommended model is llava-phi3, which is a LLaVA model fine-tuned from Microsoft's Phi 3 Mini 4k.
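To double-check that a vision-capable model is actually available on your Ollama server, you can ask Ollama's API which models have been downloaded. The sketch below is a rough example using Python's requests library; the server address and model name are assumptions to adapt to your setup.

```python
# Minimal sketch: check that a vision-capable model (e.g. llava-phi3) is available
# on the Ollama server. The address is an assumption -- adjust it to your setup.
import requests

OLLAMA_URL = "http://192.168.1.100:11434"  # 11434 is Ollama's default port
MODEL = "llava-phi3"                       # the recommended vision model from this page

# GET /api/tags lists the models that have been downloaded to this Ollama instance.
response = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
response.raise_for_status()

local_models = [m["name"] for m in response.json().get("models", [])]
print("Installed models:", local_models)

if not any(name.startswith(MODEL) for name in local_models):
    print(f"{MODEL} is not installed yet -- download it with: ollama pull {MODEL}")
```

If the model is missing, running ollama pull llava-phi3 on the Ollama machine will download it and make it appear in this list.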
If your Home Assistant is not running on the same computer as Ollama, you need to set the OLLAMA_HOST environment variable on the machine running Ollama (for example, OLLAMA_HOST=0.0.0.0) so that it accepts connections from other devices on your network.
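Once OLLAMA_HOST is set and Ollama has been restarted, you can verify from another machine that the server accepts remote connections. This is a minimal sketch, assuming Ollama's default port 11434 and a placeholder address of 192.168.1.50.

```python
# Minimal sketch: from the Home Assistant machine, confirm that a remote Ollama server
# accepts network connections (i.e. OLLAMA_HOST was set correctly on the Ollama machine).
# The address below is an assumption -- use the IP of the machine running Ollama.
import requests

OLLAMA_URL = "http://192.168.1.50:11434"

try:
    # Ollama's root endpoint replies with a short status message when it is reachable.
    response = requests.get(OLLAMA_URL, timeout=5)
    response.raise_for_status()
    print(response.text)  # typically "Ollama is running"
except requests.exceptions.ConnectionError:
    print("Could not connect -- check OLLAMA_HOST on the Ollama machine and your firewall.")
```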