Getting Started
Tinymind has been compiled to run on PCs, under either Linux or Windows, as well as within embedded systems. To get familiar with the code, I would suggest getting the unit tests building and running successfully. To do this, you will need to download the Boost C++ Libraries and unzip them onto your development system. Once you've done that, you need to configure your build system.
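For example, assuming you've downloaded the boost_1_73_0.tar.gz archive from boost.org, something like the following will unpack it into ~/code, the location used by the commands later in this guide (adjust the path to suit your system):
mkdir -p ~/code
tar -xzf boost_1_73_0.tar.gz -C ~/code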
Here I will be taking a minimalist approach: I will be documenting how to build each unit test project on the command line under Linux. If you want something fancier like CMake, VSCode, Visual Studio, etc., then you should be able to use this simple example as a guide for how to set up your build system of choice.
For this post I was using g++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0.
To build the unit tests, you'll need to add the cpp folder within the directory where you've cloned cppnnml to your include path. This folder contains all of the C++ template header files (.hpp) as well as the one .cpp file (lookupTables.cpp) that you will need if you're going to compile and run the neural network unit tests. You'll also need to make sure to add the directory where you've placed the Boost C++ Libraries to your include path.
Depending upon the unit test you're building, you will need to ensure certain preprocessor symbols are defined; these are passed to g++ with -D flags, as shown in the commands below.
The Q-Format unit test needs the preprocessor symbol ENABLE_OSTREAMS defined. This is required by the Boost unit test library, which needs to be able to stream out expected and actual values during the tests. Starting at the repo base directory, navigate to unit_test/qformat and issue the following commands to compile and run the unit test:
mkdir -p ~/qformat
g++ -o ~/qformat/qformat_unit_test qformat_unit_test.cpp -DENABLE_OSTREAMS -I../../cpp -I ~/code/boost_1_73_0
~/qformat/qformat_unit_test
You can see that I have installed the Boost C++ Libraries version 1.73.0 at ~/code. If you have the libraries installed elsewhere, you will need to specify that path. If you have them installed under /usr/include/boost, you can leave this include path off of the command line altogether.
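In that case, the command reduces to something like:
g++ -o ~/qformat/qformat_unit_test qformat_unit_test.cpp -DENABLE_OSTREAMS -I../../cpp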
If everything is configured correctly, you should see:
Running 10 test cases...
*** No errors detected
The Q-Learning unit test also needs the preprocessor symbol ENABLE_OSTREAMS defined. Since the unit test uses Q-Format types, we need to define the stream operator for the unit test builds. Starting at the repo base directory, navigate to unit_test/qlearn and issue the following commands to compile and run the unit test:
mkdir -p ~/qlearn
g++ -o ~/qlearn/qlearn_unit_test qlearn_unit_test.cpp ../../cpp/lookupTables.cpp -DENABLE_OSTREAMS -I../../cpp -I../../apps/include -DTINYMIND_USE_TANH_16_16 -I ~/code/boost_1_73_0
~/qlearn/qlearn_unit_test
Again, you can see that I have installed the Boost C++ Libraries version 1.73.0 at ~/code. If you have the libraries installed elsewhere, you will need to specify that path. If you have them installed under /usr/include/boost, you can leave this include path off of the command line altogether.
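If you're not sure whether you have a system-wide install, a quick check for the Boost headers (assuming a typical layout) is:
ls /usr/include/boost/version.hpp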
If everything is configured correctly, you should see:
Running 12 test cases...
*** No errors detected
The neural network unit test is a bit more complicated. We unit test every supported configuration of neural network provided by tinymind. This means that some of the unit tests will be testing neural networks using Q-Format values as the value type (tinymind::QValue). When tinymind::QValue is used as the value type for the neural network, we need to use lookup tables (LUTs) for neural network activation functions. To keep the code and data footprint small, we are required to tell the compiler which LUTs to compile. We don't want to compile what we're not going to use. So, we need to compile LUTs for every QValue<->Activation function pair the unit test instantiates.
The command line to compile the neural network unit test is a bit longer than either Q-Format or Q-Learning:
mkdir -p ~/nn
g++ -o ~/nn/nn_unit_test nn_unit_test.cpp ../../cpp/lookupTables.cpp -DENABLE_OSTREAMS -DTINYMIND_USE_SIGMOID_8_8 -DTINYMIND_USE_SIGMOID_16_16 -DTINYMIND_USE_LOG_16_16 -DTINYMIND_USE_TANH_8_8 -DTINYMIND_USE_TANH_8_24 -DTINYMIND_USE_TANH_16_16 -DTINYMIND_USE_EXP_16_16 -I../../cpp -I../../apps/include -I ~/code/boost_1_73_0
~/nn/nn_unit_test
Most of the preprocessor symbols in the command line are related to the generation of LUTs. Let's look at one of them:
-DTINYMIND_USE_SIGMOID_8_8
SIGMOID - The sigmoid activation function.
8 - The number of bits used to represent the integer portion of the Q-Format value.
8 - The number of bits used to represent the fractional portion of the Q-Format value.
The tie-in to the code can be found here:
#if (defined(TINYMIND_USE_SIGMOID_8_8))
template<>
struct SigmoidTableValueSize<8, 8, true>
{
    typedef SigmoidValuesTableQ8_8 SigmoidTableType;
};
#endif // (defined(TINYMIND_USE_SIGMOID_8_8))
We use the preprocessor to cull out any LUTs we don't need and only include the ones we're actually going to use. So, from the command line above, you can see which activation function<->Q-Format value pairs are used by the unit test.
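The same idea applies when you compile your own programs against tinymind: define only the LUTs your code actually instantiates. As a rough sketch (my_program.cpp and the path to your clone are placeholders), a program that only uses a Q16.16 value type with the tanh activation function could be built with just:
g++ -o my_program my_program.cpp path/to/cppnnml/cpp/lookupTables.cpp -DTINYMIND_USE_TANH_16_16 -Ipath/to/cppnnml/cpp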
If everything is configured correctly, you should see this output after running the neural network unit tests:
Running 37 test cases...
*** No errors detected
If you cd into the directory where you ran the unit tests, you will see something like this:
nn_fixed_16_16_5_and.bin nn_fixed_8_24_xor_weights.txt nn_fixed_no_train_float_weights_and.txt
nn_fixed_16_16_5_and.txt nn_fixed_8_8_relu_xor_no_train.txt nn_fixed_no_train_float_weights_xor.txt
nn_fixed_16_16_5_and_weights.txt nn_fixed_and.bin nn_fixed_no_train_nor.txt
nn_fixed_16_16_5_or.bin nn_fixed_and.txt nn_fixed_no_train_or.txt
nn_fixed_16_16_5_or.txt nn_fixed_and_weights.txt nn_fixed_no_train_xor.txt
nn_fixed_16_16_5_or_weights.txt nn_fixed_batch_2_8_24_and.bin nn_fixed_or.bin
nn_fixed_16_16_5_xor.bin nn_fixed_batch_2_8_24_and.txt nn_fixed_or.txt
nn_fixed_16_16_5_xor.txt nn_fixed_batch_2_8_24_and_weights.txt nn_fixed_or_weights.txt
nn_fixed_16_16_5_xor_weights.txt nn_fixed_batch_2_8_24_or.bin nn_fixed_relu_xor_no_train.txt
nn_fixed_2_hidden_relu_xor_no_train.txt nn_fixed_batch_2_8_24_or.txt nn_fixed_sigmoid_xor.bin
nn_fixed_5_and.bin nn_fixed_batch_2_8_24_or_weights.txt nn_fixed_sigmoid_xor.txt
nn_fixed_5_and.txt nn_fixed_batch_2_8_24_xor.bin nn_fixed_sigmoid_xor_weights.txt
nn_fixed_5_and_weights.txt nn_fixed_batch_2_8_24_xor.txt nn_fixed_xor.bin
nn_fixed_5_or.bin nn_fixed_batch_2_8_24_xor_weights.txt nn_fixed_xor.txt
nn_fixed_5_or.txt nn_fixed_batch_4_8_24_and.bin nn_fixed_xor_weights.txt
nn_fixed_5_or_weights.txt nn_fixed_batch_4_8_24_and.txt nn_float_2_hidden_relu_xor.txt
nn_fixed_5_xor.bin nn_fixed_batch_4_8_24_and_weights.txt nn_float_2_hidden_relu_xor_weights.txt
nn_fixed_5_xor.txt nn_fixed_batch_4_8_24_or.bin nn_float_and.txt
nn_fixed_5_xor_weights.txt nn_fixed_batch_4_8_24_or.txt nn_float_and_weights.txt
nn_fixed_8_24_and.bin nn_fixed_batch_4_8_24_or_weights.txt nn_float_elman.txt
nn_fixed_8_24_and.txt nn_fixed_batch_4_8_24_xor.bin nn_float_or.txt
nn_fixed_8_24_and_weights.txt nn_fixed_batch_4_8_24_xor.txt nn_float_or_weights.txt
nn_fixed_8_24_or.bin nn_fixed_batch_4_8_24_xor_weights.txt nn_float_relu_xor.txt
nn_fixed_8_24_or.txt nn_fixed_elman.txt nn_float_relu_xor_weights.txt
nn_fixed_8_24_or_weights.txt nn_fixed_nor.bin nn_float_xor.txt
nn_fixed_8_24_relu_xor_no_train.txt nn_fixed_nor.txt nn_float_xor_weights.txt
nn_fixed_8_24_xor.bin nn_fixed_nor_weights.txt nn_unit_test
nn_fixed_8_24_xor.txt nn_fixed_no_train_and.txt
The neural network unit tests output the results of neural network training and testing so that you can view the results graphically. A Python script, cppnnml/unit_test/nn/nn_plot.py, is provided to plot these results files.
To plot the neural network results you will need Python (either 2 or 3) with matplotlib installed. We can look at one plot as an example. The following plot was made by issuing:
python nn_plot.py ~/nn/nn_fixed_8_24_and.txt
As you can see, only one argument needs to be provided to the Python script: the data file path. The script parses the output file, which contains the neural network's input-layer and hidden-layer neuron weights, the expected output, the actual output, and the prediction error. Each weight column is named with the following format:
[Layer][SourceNeuron][DestinationNeuron]Weight
or
[Layer]BiasWeight
So, Input00Weight refers to the connection weight between the 0th neuron in the input layer and the 0th neuron in the hidden layer.

In the resulting figure, you can see the neural network weights converge as the prediction error drops (bottom-right graph).
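You can plot any of the other results files in the same way, for example:
python nn_plot.py ~/nn/nn_float_xor.txt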