The vision tool captures a snapshot from the user's camera feed and adds that visual context to the LLM context during a live conversation. Use it when your assistant needs access to the user's live video stream.
Create a vision tool
Attach via JSON
Add a function with `type: "vision"` and give it a clear name and description.
The function name and description are shown to the LLM as tool metadata, so they should clearly describe when the tool should be called.
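A minimal sketch of such a function definition is shown below. The name and description values are illustrative placeholders, not required identifiers; the exact schema of the surrounding scenario configuration may differ.

```json
{
  "type": "vision",
  "name": "look_at_camera",
  "description": "Capture a snapshot from the user's camera feed. Call this when the user refers to something they are showing on video."
}
```

Because the LLM decides when to call the tool based on this metadata, a description that states the triggering situation ("when the user refers to something they are showing") tends to work better than one that only names the capability.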
Attach via UI
- Open your scenario and go to the target node.
- Click Add function.
- In the modal, select the Vision Tool tab.
- Enter a semantically meaningful function name and description.

