This tutorial explains how to install the various model services on Module LLM, LLM630 Compute Kit, and AI Pyramid, and how to access and invoke their model capabilities from a PC or other devices through the standard OpenAI API. Covered use cases include chat generation, speech recognition, and text-to-speech, helping you build efficient edge AI applications.
Before you begin, follow the software update tutorial for your device to add the M5Stack APT software source and update the package index.
apt update
apt install lib-llm
apt install llm-sys llm-llm llm-vlm llm-whisper llm-melotts llm-openai-api
reboot

Configure the Python environment on your PC and install the official OpenAI Python library via pip. The subsequent examples use this library to access the LLM device and invoke its model capabilities through the standard OpenAI API.
pip install openai
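As a minimal sketch of how such access can look, the snippet below sends a chat request to the device with the OpenAI Python library. The device IP address, port, API key, and model name are placeholders chosen for illustration, not confirmed defaults; replace them with the values actually exposed by the llm-openai-api service on your device.

from openai import OpenAI

# Point the client at the device instead of the OpenAI cloud.
# The address, port, and key below are assumptions; check your device's
# llm-openai-api configuration for the real values.
client = OpenAI(
    base_url="http://192.168.1.100:8000/v1",  # hypothetical device address and port
    api_key="not-used",                       # placeholder; a local service may not validate it
)

# Request a chat completion; the model name is a placeholder for a model
# actually installed on the device.
response = client.chat.completions.create(
    model="your-installed-model",
    messages=[
        {"role": "user", "content": "Hello, who are you?"},
    ],
)

print(response.choices[0].message.content)

Run the script from your PC while the device is reachable on the same network; if the request succeeds, the device's reply is printed to the terminal.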