In this tutorial, we are going to build a simple neural network model using Keras. Keras is a high-level neural networks API written in Python that runs on top of TensorFlow, an open-source machine learning library developed by Google.
The main purpose of this tutorial is to get familiar with Keras, so we will start with a simple model: a neural network that implements a NAND gate. A neural network is of course not needed to implement a NAND gate; this implementation is for learning purposes only.
A NAND gate is a logic gate in digital electronics that produces a false output only when all of its inputs are true. It is a universal gate: any Boolean function can be implemented using NAND gates alone, without any other gate type. The truth table of the NAND gate is shown in the following table.

  A   B  |  C
  0   0  |  1
  0   1  |  1
  1   0  |  1
  1   1  |  0
The two-input NAND gate has two inputs and one output. In digital IC design, the NAND gate is implemented using N-MOS and P-MOS transistors. For illustration, the internal circuit of the NAND gate is shown in the following figure.
As you can see in the figure above, the NAND gate is implemented using two N-MOS and two P-MOS transistors, so clearly a neural network is not needed. Purely for demonstration, we are going to replace the transistor circuit of the NAND gate with a neural network model.
The following figure illustrates the neural-network-based NAND gate: the N-MOS and P-MOS transistor circuit is replaced by a neural network model.
Neural Networks Model
We can implement this neural network model using Keras; the code is shown in the following listing. First, we define the NAND gate dataset, which is simply the truth table of the gate. By changing this dataset to another truth table, we can implement other logic gates.
Next, we create a sequential model and add three layers: two hidden layers and one output layer. The add() method adds a layer to the model, and the Dense class creates a fully connected layer.
The first hidden layer is connected to the input. It has two input neurons (A and B), hence we set the input_dim parameter to 2. We set units to 3, so the first hidden layer has 3 neurons. The second hidden layer also has units=3, so it has 3 neurons as well. The output layer has only 1 neuron (units=1), corresponding to the output C. We use the ReLU activation function for the hidden layers and the sigmoid activation function for the output layer.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# The input and output, i.e. truth table, of a NAND gate
x_train = np.array([[0,0],[0,1],[1,0],[1,1]], "uint8")
y_train = np.array([[1],[1],[1],[0]], "uint8")

# Create the neural network model
model = Sequential()

# Add layers to the model
model.add(Dense(units=3, activation='relu', input_dim=2))  # first hidden layer
model.add(Dense(units=3, activation='relu'))               # second hidden layer
model.add(Dense(units=1, activation='sigmoid'))            # output layer

# Compile the neural network model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the neural network model
model.fit(x_train, y_train, epochs=5000)

# Test the output of the trained neural-network-based NAND gate
y_predict = model.predict(x_train)

# Save the trained model as an .h5 file
model.save('model.h5')
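To see what the 2-3-3-1 architecture above actually computes, here is a hand-written forward pass in plain NumPy. The weight values are made up for illustration only; in Keras they are learned during training:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Illustrative (not trained) weights and biases for the 2-3-3-1 network
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)   # first hidden layer
W2, b2 = rng.normal(size=(3, 3)), np.zeros(3)   # second hidden layer
W3, b3 = rng.normal(size=(3, 1)), np.zeros(1)   # output layer

x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # NAND inputs
h1 = relu(x @ W1 + b1)                          # shape (4, 3)
h2 = relu(h1 @ W2 + b2)                         # shape (4, 3)
y = sigmoid(h2 @ W3 + b3)                       # shape (4, 1), values in (0, 1)
print(y.shape)  # (4, 1)
```

With random weights the output is meaningless; training adjusts W1, W2, W3 and the biases so that y matches the NAND truth table.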
Compiling the Model
We compile the model by calling the
compile() method. We pass three arguments to this method: the optimizer, the loss function, and the metrics. The optimizer is the algorithm that finds the best weights during training. There are several optimizers, such as gradient descent, stochastic gradient descent, and Adam. We select the Adam optimizer because it is a very efficient algorithm.
The loss function measures the error that the optimizer tries to minimize. We use the
binary_crossentropy loss function because the output has two categories: in this case, the output C is binary (0 or 1). The third argument is the list of metrics used to monitor the model; here we use accuracy. Note that the optimizer minimizes the loss, not the metric; as the loss decreases over the training iterations, the accuracy typically increases.
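To make the loss concrete, here is a small sketch in plain NumPy (separate from the Keras listing; the helper name bce is our own) that computes binary cross-entropy by hand for the NAND targets:

```python
import numpy as np

def bce(y_true, y_pred):
    # Binary cross-entropy: -mean(y*log(p) + (1-y)*log(1-p))
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 1, 1, 0])          # NAND truth table outputs
good   = np.array([0.9, 0.9, 0.9, 0.1])  # confident, mostly correct predictions
bad    = np.array([0.5, 0.5, 0.5, 0.5])  # uninformative predictions

print(bce(y_true, good))  # small loss (~0.105)
print(bce(y_true, bad))   # larger loss (log 2 ~ 0.693)
```

The closer the predicted probabilities are to the true labels, the smaller the loss, which is exactly what the optimizer exploits during training.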
Training the Model
We train the model by calling the
fit() method, which fits the model to the dataset. The first and second arguments are the dataset: x_train and
y_train. The third argument, epochs, is the number of training epochs, here 5000. After 5000 epochs the accuracy reaches 1, as shown in the following figure.
We can make a prediction with the trained model by calling the
predict() method, and then print the result with print(y_predict).
In the figure above, we can see that the prediction output is very close to the truth table of the NAND gate. To get a binary output, we can execute
np.round(y_predict), which simply rounds each element of
y_predict to 0 or 1. Finally, we save the trained model as an .h5 file. This model can be read by STM32Cube.AI for deployment on an STM32 microcontroller.
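As a quick sketch of the rounding step, here is what np.round does to a hypothetical set of sigmoid outputs (the values below are made up for illustration, not taken from an actual training run):

```python
import numpy as np

# Hypothetical sigmoid outputs close to the NAND truth table
y_predict = np.array([[0.98], [0.97], [0.99], [0.03]])

# Round each probability to the nearest integer, giving a binary output
y_binary = np.round(y_predict).astype("uint8")
print(y_binary.ravel())  # [1 1 1 0], matching the NAND truth table
```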
Sometimes, when you train the model, it does not reach an accuracy of 1, as shown in the following figure. This happens because the weights of the model are randomly initialized, and the optimizer can get stuck in a local minimum or a flat region. As a result, the loss stays high, the accuracy stays below 1, and the prediction results are wrong. To solve this problem, we can simply re-run the code with a different starting point (different random initial weights), which allows the optimizer to find a different, and hopefully better, solution.
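Because the initial weights are random, each run starts the optimizer from a different point. A minimal NumPy sketch of this idea (the init_weights helper is our own illustration, not a Keras API):

```python
import numpy as np

def init_weights(seed):
    # Randomly initialize a 2x3 weight matrix, similar to what
    # Keras does internally for the first hidden layer
    rng = np.random.default_rng(seed)
    return rng.normal(size=(2, 3))

w_run1 = init_weights(seed=0)
w_run2 = init_weights(seed=1)

# Different seeds give different starting points for the optimizer
print(np.allclose(w_run1, w_run2))  # False
```

In Keras, simply re-running the script draws new initial weights, which is why a second run can escape the bad starting point of the first.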