This project provides a C++ library for creating and training neural networks using a genetic algorithm. It includes a matrix library for underlying mathematical operations.
## Features

- Customizable neural network architecture (`NeuroNet`, `NeuroNetLayer`).
- Genetic algorithm (`GeneticAlgorithm`) for evolving network weights and biases.
- Matrix library (`Matrix`) for numerical computations.
- Built with CMake.
- Unit tests using Google Test.
## Prerequisites

- C++ compiler supporting C++17 or later
- CMake (version 3.10 or later)
## Building

1. Clone the repository:

   ```sh
   git clone <repository_url>
   cd <repository_name>
   ```

2. Create a build directory:

   ```sh
   mkdir build
   cd build
   ```

3. Configure with CMake:

   ```sh
   cmake ..
   ```

4. Build the project (library and tests):

   ```sh
   make
   # Or, on Windows with MSVC:
   # cmake --build .
   ```
This will build the `neuronet` static library and the test executables.
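To consume the static library from your own CMake project, a minimal sketch might look like the following. Note that the target name `neuronet` and the `src/` include path are assumptions based on this repository's layout; adjust them to match your checkout.

```cmake
# Hypothetical consumer CMakeLists.txt -- names and paths are illustrative.
cmake_minimum_required(VERSION 3.10)
project(my_app CXX)
set(CMAKE_CXX_STANDARD 17)

# Builds the neuronet static library as part of this project
add_subdirectory(path/to/this_repo)

add_executable(my_app main.cpp)
target_include_directories(my_app PRIVATE path/to/this_repo/src)
target_link_libraries(my_app PRIVATE neuronet)
```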
## Running Tests

After building the project, you can run the tests from the build directory:

```sh
ctest
# Or, if the 'test' target is available:
# make test
```

Google Test is used as the testing framework and is fetched automatically by CMake during the configuration step.
## Using the NeuroNet Library

Here's a minimal example of how to use the NeuroNet library:
```cpp
#include "neuronet.h" // Assumes include paths are set up
#include <iostream>
#include <vector>

int main() {
    // 1. Create a NeuroNet instance
    NeuroNet::NeuroNet myNetwork;

    // 2. Configure the network architecture
    int inputSize = 10;
    myNetwork.SetInputSize(inputSize); // Set the size of the input vector

    // Add layers: ResizeNeuroNet(numberOfLayers),
    // then ResizeLayer(layerIndex, numberOfNeuronsInLayer)
    myNetwork.ResizeNeuroNet(2);  // Create a network with 2 layers
    myNetwork.ResizeLayer(0, 15); // Layer 0 has 15 neurons
    myNetwork.ResizeLayer(1, 5);  // Layer 1 has 5 neurons (output layer)

    // (Optional) Initialize weights and biases manually if not using the GA,
    // e.g., with set_all_weights_flat() and set_all_biases_flat(), summing
    // counts from each layer. The GA handles this initialization internally
    // for its population.

    // 3. Create an input matrix
    Matrix::Matrix<float> inputMatrix(1, inputSize);
    for (int i = 0; i < inputSize; ++i) {
        inputMatrix[0][i] = static_cast<float>(i) * 0.1f; // Example input
    }

    // 4. Set input to the network
    if (myNetwork.SetInput(inputMatrix)) {
        // 5. Get the output
        Matrix::Matrix<float> outputMatrix = myNetwork.GetOutput();

        // Print the output
        std::cout << "Network Output:" << std::endl;
        for (int i = 0; i < outputMatrix.cols(); ++i) {
            std::cout << outputMatrix[0][i] << " ";
        }
        std::cout << std::endl;
    } else {
        std::cerr << "Error setting input to the network." << std::endl;
    }
    return 0;
}
```
## Using the GeneticAlgorithm

Here's a minimal example of how to use the GeneticAlgorithm:
```cpp
#include "genetic_algorithm.h"
#include "neuronet.h"
#include <cmath> // For std::fabs
#include <iostream>
#include <vector>

// 1. Define a fitness function.
// It evaluates a NeuroNet and returns a fitness score (higher is better).
double simpleFitnessFunction(NeuroNet::NeuroNet& network) {
    // Toy problem: get the network to output the sum of its inputs.
    int inputSize = network.GetInputSize();
    if (inputSize == 0) return 0.0;

    Matrix::Matrix<float> testInput(1, inputSize);
    float expectedSum = 0.0f;
    for (int i = 0; i < inputSize; ++i) {
        testInput[0][i] = static_cast<float>(i + 1); // e.g., 1, 2, 3...
        expectedSum += testInput[0][i];
    }

    network.SetInput(testInput);
    Matrix::Matrix<float> output = network.GetOutput();
    if (output.cols() == 0) return 0.0; // No output produced

    // Assume the network produces the sum in its first output neuron.
    float actualOutput = output[0][0];

    // Fitness: inverse of the absolute difference; maximize by minimizing error.
    double error = std::fabs(actualOutput - expectedSum);
    return 1.0 / (1.0 + error); // Add 1 to avoid division by zero and normalize
}

int main() {
    // 2. Create a template NeuroNet for the GA population
    NeuroNet::NeuroNet templateNetwork;
    int inputSize = 5;
    templateNetwork.SetInputSize(inputSize);
    templateNetwork.ResizeNeuroNet(2); // 2 layers
    templateNetwork.ResizeLayer(0, 8); // Layer 0 with 8 neurons
    templateNetwork.ResizeLayer(1, 1); // Layer 1 with 1 neuron (read by the fitness function)

    // 3. Instantiate the GeneticAlgorithm
    int populationSize = 50;
    double mutationRate = 0.1;
    double crossoverRate = 0.7;
    int numGenerations = 100;
    NeuroNet::GeneticAlgorithm ga(
        populationSize,
        mutationRate,
        crossoverRate,
        numGenerations,
        templateNetwork // Pass the template network
    );

    // 4. Run the evolution process
    std::cout << "Starting evolution..." << std::endl;
    ga.run_evolution(simpleFitnessFunction);
    std::cout << "Evolution finished." << std::endl;

    // 5. Get the best individual
    NeuroNet::NeuroNet bestNetwork = ga.get_best_individual();

    // You can now use bestNetwork, e.g., test its output or save its weights.
    std::cout << "Best individual's fitness (re-evaluated): "
              << simpleFitnessFunction(bestNetwork) << std::endl;

    // Example: print the first few weights of the best network's first layer
    if (bestNetwork.GetLayerCount() > 0) {
        NeuroNet::LayerWeights weights = bestNetwork.get_all_layer_weights()[0];
        std::cout << "Best network, Layer 0, first few weights: ";
        for (std::size_t i = 0; i < 5 && i < weights.WeightsVector.size(); ++i) {
            std::cout << weights.WeightsVector[i] << " ";
        }
        std::cout << std::endl;
    }
    return 0;
}
```

## Key Components

- **NeuroNet** (`src/neuronet.h`, `src/neuronet.cpp`): The core class representing a neural network. Manages layers and network-level operations.
- **NeuroNetLayer** (`src/neuronet.h`, `src/neuronet.cpp`): Represents a single layer within a NeuroNet. Handles calculations for that layer.
- **GeneticAlgorithm** (`src/genetic_algorithm.h`, `src/genetic_algorithm.cpp`): Implements the genetic algorithm to train/evolve NeuroNet instances.
- **Matrix** (`src/matrix.h`): A header-only template library for matrix operations, used by NeuroNetLayer for calculations.

## Dependencies

- **Google Test**: Used for unit testing. Fetched automatically by CMake via `FetchContent` during build configuration; no manual installation is required.