Providers
LLM Vision combines multiple AI providers into one easy-to-use integration. The following AI providers are supported:
If you are unsure which model to use, there is a comparison available here: Choosing the right model
Easy to set up and blazingly fast
Achieve maximum privacy by hosting LLMs on a local machine
These providers cannot handle AI requests but provide additional context or settings
Each provider is slightly different, but most will require an API key. Self-hosted providers need a base URL and port.
Providers can also be reconfigured if you need to change anything later!
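For example, a cloud provider typically only needs the API key from your account dashboard, while a self-hosted provider such as Ollama needs the address of the machine running it. The values below are placeholders and are entered in the provider's setup dialog rather than in configuration.yaml; the field names are illustrative:

```yaml
# Illustrative values only – entered in the provider's setup dialog, not in YAML
# Cloud provider (e.g. OpenAI, Anthropic, Google):
api_key: sk-...key-from-the-provider-dashboard...

# Self-hosted provider (e.g. Ollama running on another machine):
base_url: http://192.168.1.50   # IP or hostname of the machine hosting the model
port: 11434                     # Ollama's default port; LocalAI commonly uses 8080
```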
To set up your first provider:

1. Navigate to Devices & Services in the Home Assistant settings
2. Click Add Integration
3. Search for 'LLM Vision'
4. Select the provider you want to set up from the dropdown
5. Enter all required details
On success, you will see LLM Vision in your integrations. From there, you can set up new providers or delete existing ones.
You can have multiple configurations per provider. This is especially useful for local providers, for example when two machines host different models.
When running an action, you can select one of your provider configurations:
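In an automation or script, the provider field of an LLM Vision action chooses which configuration handles the request. Below is a minimal sketch, assuming the llmvision.image_analyzer action; the entity and provider IDs are placeholders, and the exact fields available in your version are listed under Developer Tools > Actions:

```yaml
# Illustrative action call – the provider value refers to one of your provider configurations
action: llmvision.image_analyzer
data:
  provider: 01JXXXXXXXXXXXXXXXXXXXXXXXXX  # placeholder configuration ID; pick it from the dropdown in the UI
  message: "What is happening in this image?"
  image_entity:
    - camera.front_door                   # hypothetical camera entity
  max_tokens: 100
```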
Timeline
Stores events to build a timeline. Exposes the calendar entity needed for the Timeline Card.
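If you use the Timeline Card, it reads its events from this calendar entity. A minimal dashboard sketch, under the assumption that the card is installed and registered as custom:llmvision-card and that the integration created calendar.llm_vision_timeline; both names are assumptions, so check the card's documentation and your own entity list:

```yaml
# Hypothetical dashboard card – card type and entity ID may differ on your installation
type: custom:llmvision-card
entity: calendar.llm_vision_timeline  # assumed calendar entity exposed by the Timeline provider
```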
Memory
Stores reference images along with descriptions to provide additional context to the model. Syncs across all providers.