In this tutorial, we are going to load the Keras .h5 model into STM32Cube.AI. Then, STM32Cube.AI will generate the C code of the model. After that, we are going to validate the generated C-model by running it on the STM32 microcontroller.

The STM32Cube.AI has a validation engine that compares the accuracy of the generated C-model against the uploaded model, in this case the Keras .h5 model. The validation process can be run on the desktop or on the target (the STM32 microcontroller). When validation is run on the desktop, the engine compares the generated x86 C-model with the Keras model; when it is run on the target, the engine compares the generated STM32 C-model with the Keras model.

Prerequisites

In order to follow this tutorial, you should have the following software tools installed:

  • STM32CubeMX: a graphical tool for easily configuring STM32 microcontrollers.
    • STM32Cube MCU Packages: a library required to develop an application on STM32.
    • X-CUBE-AI: a library required to develop neural network applications on STM32.
  • STM32CubeIDE: an integrated development environment from ST for editing and compiling the STM32CubeMX-generated code.

New Project (Validation)

Run STM32CubeMX. Go to File > New Project... and, in the dialog that pops up, you can go to either the MCU/MPU Selector or the Board Selector. In this tutorial, I use the NUCLEO-F446RE board, so I go to Board Selector, search for NUCLEO-F446RE, and click Start Project.

On the next dialog, click Yes to initialize all peripherals of the NUCLEO-F446RE board in their default mode.

You should now see your project in the main window.

In order to run the neural network validation project on the STM32, we need to enable a UART peripheral. The STM32 sends the validation results back to STM32CubeMX over this UART. Since I used the Board Selector and initialized the board peripherals to their default mode, the UART is already enabled. If you use another board, you may have to enable the UART manually (you can choose any available UART).
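For reference, the CubeMX-generated UART initialization on a Nucleo board typically looks something like the sketch below (the USART2 instance and 115200 baud setting are assumptions here; the actual generated code depends on your board and settings):

/* Sketch of a typical CubeMX-generated UART init (assumed: USART2, 115200 8N1).
 * The actual generated usart.c/main.c may differ for your board. */
#include "stm32f4xx_hal.h"

extern void Error_Handler(void);   /* provided by the generated main.c */

UART_HandleTypeDef huart2;

static void MX_USART2_UART_Init(void)
{
  huart2.Instance          = USART2;
  huart2.Init.BaudRate     = 115200;
  huart2.Init.WordLength   = UART_WORDLENGTH_8B;
  huart2.Init.StopBits     = UART_STOPBITS_1;
  huart2.Init.Parity       = UART_PARITY_NONE;
  huart2.Init.Mode         = UART_MODE_TX_RX;
  huart2.Init.HwFlowCtl    = UART_HWCONTROL_NONE;
  huart2.Init.OverSampling = UART_OVERSAMPLING_16;
  if (HAL_UART_Init(&huart2) != HAL_OK)
  {
    Error_Handler();
  }
}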

Validation on Desktop

On the main window, go to Additional Software. In the dialog that pops up, select X-CUBE-AI/Application and set it to Validation. After that, select X-CUBE-AI/X-CUBE-AI, enable the core, and click Ok.

On the main window, in the Categories tab, click Additional Software > STMicroelectronics.X-CUBE-AI. Select the Platform Settings tab, choose USART: Asynchronous, and select UART2.

Now we are ready to import our neural network model!

Click the Add network button or the + tab to add a neural network model. Choose Keras and Saved model, and then browse to the nand.h5 model.

Scroll down and click the Analyze button. The tool will analyze the model. The report shows the model complexity, number of layers, number of parameters, multiply-accumulate (MACC) operations, and memory size.
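As a rough sanity check on those numbers (my own back-of-the-envelope estimate, not the tool's exact accounting), for a single fully connected layer with n_in inputs and n_out outputs the MACC count is about n_in × n_out, and the float32 weight storage is about (n_in + 1) × n_out × 4 bytes:

/* Rough estimate for one fully connected layer (assumption: float32 weights,
 * no compression); the tool's own report also covers activations, other layer
 * types, and runtime overhead. */
#include <stdio.h>

int main(void)
{
  unsigned n_in = 8, n_out = 16;                    /* hypothetical layer sizes  */
  unsigned macc = n_in * n_out;                     /* multiply-accumulate count */
  unsigned weight_bytes = (n_in + 1) * n_out * 4;   /* weights + biases, float32 */
  printf("MACC ~ %u, weights ~ %u bytes\n", macc, weight_bytes);
  return 0;
}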

Click the Validate on desktop button. The tool will build the x86 C-model and validate it on the desktop with random data. The report shows the accuracy and error (RMSE, MAE, l2r). If the l2r error is below a certain threshold (expected to be < 0.01), the tool will report that there is no or very little numerical degradation due to the C implementation.
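For intuition, the l2r value is the relative L2 error: the L2 norm of the difference between the C-model output and the reference Keras output, divided by the L2 norm of the reference output. A minimal sketch of the three metrics (my own illustration, not ST's exact implementation):

/* Sketch of the error metrics between the C-model output (c) and the
 * reference Keras output (ref); my own illustration, not ST's exact code. */
#include <math.h>
#include <stdio.h>

static void error_metrics(const float *c, const float *ref, int n)
{
  double sq = 0.0, abs_sum = 0.0, ref_sq = 0.0;
  for (int i = 0; i < n; i++)
  {
    double d = (double)c[i] - (double)ref[i];
    sq      += d * d;                              /* sum of squared errors   */
    abs_sum += fabs(d);                            /* sum of absolute errors  */
    ref_sq  += (double)ref[i] * (double)ref[i];    /* reference squared norm  */
  }
  printf("RMSE: %g\n", sqrt(sq / n));              /* root mean squared error */
  printf("MAE : %g\n", abs_sum / n);               /* mean absolute error     */
  printf("l2r : %g\n", sqrt(sq) / sqrt(ref_sq));   /* relative L2 error       */
}

int main(void)
{
  const float c_out[3]   = { 0.10f, 0.72f, 0.18f };  /* hypothetical C-model output */
  const float ref_out[3] = { 0.11f, 0.70f, 0.19f };  /* hypothetical Keras output   */
  error_metrics(c_out, ref_out, 3);
  return 0;
}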

Validation on STM32

Now that we have validated the C-model on the desktop, we need to generate the code to be compiled and downloaded into the STM32. Go to the Project Manager tab and give your project a name. For instance, stm32f446_hal_nand_application. After that, choose the Toolchain / IDE. In this tutorial, I use STM32CubeIDE. Finally, click the GENERATE CODE button.

After the code has been generated, a dialog will pop up; click the Open Project button.

In the workspace chooser that pops up, I recommend choosing the parent folder of your project folder. In my case, my project folder is D:\Microelectronics\STM32\stm32f446_hal_nand_application\. So, my workspace folder is D:\Microelectronics\STM32\.

If you see a welcome tab, you can close it. Finally, you should see your project in the Project Explorer tab.

We can see how the project is organized:

  • Drivers: This folder contains the ARM CMSIS library and the STM32 HAL library.
  • Middlewares: This folder contains the X-CUBE-AI library and the source code for the validation application.
  • Src:
    • main.c: This is the main file of your application where you can find the main() function.
    • app_x-cube-ai.c: This is the main file for AI-related user functions. The AI functions in this file are called from the main.c file (see the sketch after this list).
    • network_data.c: This file contains the neural network weights.
    • network.c: This is the C implementation of our neural network model.
  • Startup: This folder contains the STM32 startup file.
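In the generated main.c, the hook into the X-CUBE-AI code typically looks like the trimmed sketch below (the exact generated code can differ between X-CUBE-AI versions):

/* Trimmed sketch of how the generated main.c typically calls the X-CUBE-AI
 * code; the real file contains more generated init and user-code sections. */
#include "main.h"
#include "app_x-cube-ai.h"

int main(void)
{
  HAL_Init();
  SystemClock_Config();          /* generated clock setup              */
  MX_GPIO_Init();                /* generated peripheral init          */
  MX_USART2_UART_Init();         /* UART used to talk to STM32CubeMX   */

  MX_X_CUBE_AI_Init();           /* initialize the network runtime     */

  while (1)
  {
    MX_X_CUBE_AI_Process();      /* run the validation firmware loop   */
  }
}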

With your project open, open the Run menu, and then click the Debug Configurations... option. In the dialog that pops up, click the Debug button.

In the Confirm Perspective Switch dialog that pops up, click the Switch button (and check Remember my decision if you want). This will open the Debug perspective, in which you can debug your application.

After that, you should see the debug perspective like this:

Finally, click the Resume button or press F8 to start the application.

Now we are ready to validate our model on STM32!

Go back to STM32CubeMX and click the Validate on target button. In the dialog that pops up, set the communication port to Manual, choose your STM32 board’s COM port, and click Ok.

Finally, you should see the validation report like this:

The report shows the l2r error, which is also below the threshold. In other words, there is no or very little numerical degradation due to the C implementation.

Pruning Neural Networks

Pruning or compressing a neural network reduces the number of parameters in the network. Among the many parameters in a network, some are redundant and contribute little to the output, so they can be removed. As a result, we get a smaller and faster network.

Getting smaller and faster networks is very important for running these networks on memory limited devices.

The X-CUBE-AI tool already has a compression feature. When you import the network, you can choose a compression factor of 4 or 8. The validation process may fail if you have a large neural network model and choose a compression factor of 8.
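Conceptually, this kind of compression trades a little accuracy for memory by replacing full-precision weights with indices into a small shared codebook. Here is a toy sketch of the idea (my own illustration, not ST's actual algorithm):

/* Toy illustration of weight-sharing compression (not ST's actual algorithm):
 * each 32-bit float weight is replaced by an 8-bit index into a small shared
 * codebook, giving roughly a 4x reduction in weight storage. */
#include <stdio.h>

#define CODEBOOK_SIZE 16

static const float codebook[CODEBOOK_SIZE] = {
  -0.80f, -0.60f, -0.45f, -0.30f, -0.20f, -0.12f, -0.06f, -0.02f,
   0.02f,  0.06f,  0.12f,  0.20f,  0.30f,  0.45f,  0.60f,  0.80f
};

/* The compressed layer stores 8-bit indices instead of 32-bit floats. */
static const unsigned char weight_idx[6] = { 3, 8, 15, 0, 10, 7 };

int main(void)
{
  /* Decompress each weight on the fly when it is needed. */
  for (int i = 0; i < 6; i++)
  {
    float w = codebook[weight_idx[i]];
    printf("w[%d] = %f\n", i, w);
  }
  return 0;
}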

Summary

The X-CUBE-AI validation project is used to validate your neural network model. It reports the complexity and memory footprint of your model, so you can tell whether it fits into your memory-limited device. It measures the numerical degradation due to the C implementation and compression. The X-CUBE-AI tool can also compress your model so you get a smaller model that may fit into your device.

Next: Evaluate Model’s Performance on STM32
