V-Training

Setup

V-Training platform

Through M5Stack's V-Training (AI model training service), you can easily build a custom recognition model. Use a mobile phone or other camera to take pictures and save them to your computer. Then open the V-Training online training platform in a browser and register a login account (an M5 forum account can be used to log in directly).

Import pictures

Click Start->New Project->Import Image->NEXT->Object Detection. Note: the quality and quantity of the training set directly affect the quality of the trained model. When shooting or collecting training material, provide the highest-quality images you can, the more the better, and shoot in scenes that match the actual recognition scene.

Material processing

Create label

Before drawing bounding boxes, we need to create a label name for each object to be recognized. In the subsequent image-tagging step, we assign the corresponding label to each box according to the object it contains. (Click the + sign on the left side of the pop-up window to create multiple labels.) You can also import labels in batches from a text file (Load Labels from file); the file is a plain .txt file with one label name per line, as shown below.


//Labels.txt

Dog
Cat
Bird
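
If the label set is long, the file can be generated with a short script. Below is a minimal sketch (the script name, label names, and output file name are only illustrative, not part of the platform):

# make_labels.py (hypothetical helper, not part of V-Training)
labels = ["Dog", "Cat", "Bird"]

# One label name per line, matching the Load Labels from file format
with open("Labels.txt", "w") as f:
    f.write("\n".join(labels) + "\n")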

Tag image

After creating the labels, the next step is to tag the images: draw a box around each object to be recognized in the training material. The left side of the page lists the images to be processed, and the markers beneath each thumbnail indicate which images have already been processed.

Manually mark

Click the arrows in the bottom bar to switch images (or press the keyboard left/right arrow keys). The menu bar on the right is the label list. After drawing a box around an object, assign it the corresponding label from this list.

AI automatic marking

When processing material in batches, you can also use the AI auto-marking function to improve efficiency. Click Load AI Model in the lower left corner -> check COCO SSD-object detection using rectangles -> Use model!. Once the detection model has loaded, the page color changes to green. At this point you can check the label names created in the previous step, add them to the list for subsequent marking, and click Accept.

The AI will automatically mark and box the objects it can identify in each picture; you then need to review the boxes on every image. When the AI recognizes an object of a given category, a class-selection box pops up. You can check the AI-suggested tag category to add it to the tag list, or click Accept directly to proceed and classify the boxes using the labels already in the list.

If a box is correct, click the + on it to keep it (or press Enter to confirm). If a box is wrong, click its delete button to remove it and draw the box manually instead. After the AI finishes, you can also reassign any box to a different label in the label column on the right.

Model training

After completing the box selection, click Next to upload the material (if there is unmarked material, a prompt appears on this page), then click UPLOAD (currently the efficient training mode is supported) and you will jump to the task list. Click Refresh to view the latest status of the task. After training completes, you will get a download link for the model and the loss curve.

Common precautions:

-If you encounter a training failure, click the task's Detail button to view the error details, find the cause of the error, and re-upload after correcting it.

Model preview

The online preview function of the model is still in the development stage. At present, users can experience the recognition effect by loading the model through the program.

Loading the model in a program

Run the model

-Ethernet mode connection: UnitV2 has a built-in wired network card; when you connect it to a PC through the Type-C interface, the PC automatically establishes a network connection with UnitV2.

-AP mode connection: after UnitV2 starts, it enables an AP hotspot by default (SSID: M5UV2_XXX, PWD: 12345678), and the user can establish a network connection with UnitV2 by joining this WiFi.

Connect to the UnitV2 device through either of the two modes above, then open the domain name unitv2.py or the IP address 10.254.239.1 in a browser to reach the function preview webpage. Switch the function to Object Recognition and click the add button to upload the model. Note: please install the SR9900 driver before use; for detailed installation steps, refer to the previous chapter, Jupyter notebook.
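
Before opening the preview page, it can help to verify that the network link is up. The following is a minimal sketch using only the Python standard library; the script name is hypothetical, and the address is the one given above (unitv2.py resolves to the same device):

# check_unitv2.py (hypothetical helper)
import urllib.request

URL = "http://10.254.239.1"

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print("UnitV2 reachable, HTTP status:", resp.status)
except OSError as err:
    print("UnitV2 not reachable:", err)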

After uploading, click Run to start using the model. During recognition, UnitV2 continuously outputs recognition sample data in JSON format over the serial port (HY2.0-4P interface at the bottom, UART: 115200 bps 8N1).

Sample output

{
    "num": 1,
    "obj": [
        {
            "prob": 0.938137174,
            "x": 179,
            "y": 186,
            "w": 330,
            "h": 273,
            "type": "dog"
        }
    ],
    "running": "Object Recognition"
}
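
On the host side, the stream can be read with a small script. The sketch below assumes pyserial is installed and that the serial adapter wired to the HY2.0-4P port enumerates as /dev/ttyUSB0; adjust the port name (and the line framing, if your device pretty-prints its JSON across multiple lines) to match your setup.

# read_detections.py (hypothetical helper; pip install pyserial)
import json
import serial

# UART parameters from above: 115200 bps, 8 data bits, no parity, 1 stop bit
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    while True:
        line = port.readline().decode("utf-8", errors="ignore").strip()
        if not line:
            continue
        try:
            result = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial or non-JSON lines
        for obj in result.get("obj", []):
            print(f'{obj["type"]}: prob={obj["prob"]:.2f}, '
                  f'box=({obj["x"]}, {obj["y"]}, {obj["w"]}, {obj["h"]})')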