Product Guide
This tutorial explains how to install the various model services on the Module LLM and LLM630 Compute Kit, and how to access and invoke their model capabilities from a PC or other devices through the standard OpenAI API. It covers common scenarios such as chat completion, speech recognition, and speech synthesis, helping you build efficient edge AI applications.
Before you begin, follow the software upgrade tutorial for your device to add the M5Stack apt repository source and refresh the package index.
apt update
apt install lib-llm
apt install llm-sys llm-llm llm-vlm llm-whisper llm-melotts llm-openai-api
reboot
On the PC, configure your Python environment and install the official OpenAI Python library via pip. Subsequent steps will use this library and the standard OpenAI API interface to access the LLM device and invoke its model capabilities.
pip install openai
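Once the library is installed, you can point an OpenAI client at the device instead of the official OpenAI endpoint. Below is a minimal chat-completion sketch; the device IP address, port, and model name are assumptions for illustration, so replace them with your device's actual address and the model identifier your installation reports.

from openai import OpenAI

# Assumed values: adjust the base_url to your device's IP/port and the model
# name to the one exposed by your installation. The local service typically
# does not validate the API key, so any placeholder string works.
client = OpenAI(
    base_url="http://192.168.1.100:8000/v1",  # assumed device address on your LAN
    api_key="not-used",                       # placeholder key
)

response = client.chat.completions.create(
    model="qwen2.5-0.5B-prefill-20e",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Introduce yourself in one sentence."},
    ],
)

print(response.choices[0].message.content)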
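Speech recognition works the same way through the standard audio transcription endpoint. The sketch below assumes the llm-whisper service is installed; the model name "whisper-tiny" and the device address are illustrative placeholders, not confirmed identifiers.

from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.1.100:8000/v1",  # assumed device address, as above
    api_key="not-used",
)

# Send a local WAV file for transcription. "whisper-tiny" is an assumed model
# name; use the identifier exposed by your llm-whisper installation.
with open("speech.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-tiny",
        file=audio_file,
    )

print(transcript.text)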