Usage
Learn how to use LLM Vision
Image Analyzer
Example Service Call
Once you have set up at least one provider and have at least one camera in Home Assistant, you can test LLM Vision by calling the llmvision.image_analyzer service.
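For instance, a minimal image_analyzer call might look like the sketch below. The provider name, camera entity, and prompt are placeholders; substitute the provider you configured and one of your own camera or image entities.

```yaml
# Minimal sketch of an image_analyzer service call.
# "OpenAI" and camera.front_door are hypothetical examples.
service: llmvision.image_analyzer
data:
  provider: OpenAI              # one of your configured providers
  message: "Describe what you see on this camera."
  image_entity:
    - camera.front_door         # hypothetical camera entity
  max_tokens: 100
  temperature: 0.5
```

You can run this from Developer Tools → Actions in Home Assistant to verify your provider responds before wiring it into automations.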
More examples & inspiration: https://llm-vision.gitbook.io/examples/
Full Service Call Reference
For all available models, check: https://llm-vision.gitbook.io/getting-started/choosing-the-right-model
Video Analyzer
Similarly, you can analyze video files by calling the llmvision.video_analyzer
service with the following data:
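A hedged example of a video_analyzer call is sketched below; the file path and provider are illustrative placeholders, not values from this documentation.

```yaml
# Sketch of a video_analyzer service call.
# The provider name and video path are hypothetical.
service: llmvision.video_analyzer
data:
  provider: OpenAI              # one of your configured providers
  message: "Summarize what happens in this clip."
  video_file: /config/www/clip.mp4   # hypothetical path
  interval: 3                   # analyze one frame every 3 seconds
  max_tokens: 100
```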
Service Call Parameters
| Parameter | Required | Description | Default | Valid Values |
| --- | --- | --- | --- | --- |
| provider | Yes | The AI provider to call. | | One of your configured providers |
| model | No | Model used for processing the image(s). | See 'Choosing the right model' | Any model supported by the provider |
| message | Yes | The prompt to send along with the image(s). | | String |
| image_file | No* | The path to the image file(s). Each path must be on a new line. | | Valid path to an image file |
| image_entity | No* | An alternative to image_file: camera or image entities. Each entity must be on a new line. | | Any camera or image entity |
| video_file | No* | The path to the video file(s). Each path must be on a new line. | | Valid path to a video file |
| event_id | No* | Event ID from Frigate. Each id must be on a new line. | | Valid Frigate event ID |
| interval | Yes | Analyze a frame every 'interval' seconds. | 3 | Integer between 1 and 100 |
| include_filename | No | Whether to include the filename in the request. | false | true, false |
| target_width | No | Width to downscale the image to before encoding. | 1280 | Integer between 512 and 3840 |
| detail | No | Level of detail to use for image understanding. | auto | low, high, auto |
| max_tokens | Yes | The maximum number of response tokens to generate. | 100 | Integer between 10 and 1000 |
| temperature | Yes | Randomness of the output. | 0.5 | Float between 0.0 and 1.0 |

*At least one of image_file, image_entity, video_file, or event_id must be provided.
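To put these parameters to use, the analyzer's response can be captured with a response_variable and forwarded elsewhere. The sketch below assumes a hypothetical motion sensor, camera entity, and notify service; adapt all three to your setup.

```yaml
# Sketch: notify your phone with a description when motion is detected.
# binary_sensor.front_door_motion, camera.front_door, and
# notify.mobile_app_phone are hypothetical names.
automation:
  - alias: "Describe activity at the front door"
    trigger:
      - platform: state
        entity_id: binary_sensor.front_door_motion
        to: "on"
    action:
      - service: llmvision.image_analyzer
        data:
          provider: OpenAI            # one of your configured providers
          message: "Describe what is happening at the front door."
          image_entity:
            - camera.front_door
          max_tokens: 100
        response_variable: response   # capture the analyzer's output
      - service: notify.mobile_app_phone
        data:
          message: "{{ response.response_text }}"
```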