This project is an implementation of a neural network library from scratch, using only Python and NumPy. It is inspired by the old Lua versions of Torch, before the introduction of autograd.
- Implementation of essential modules such as linear layers, 1D convolutions, and more.
- Continuous integration and deployment of documentation with mathematical explanations.
- Efficient computation that avoids Python for loops through advanced use of NumPy (a sketch of this vectorized style follows the list).
- Clean and well-structured code.
- A detailed report showcasing various examples and experiments using different architectures (available in the `scripts` and `notebooks` folders).
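
To give a flavor of the vectorized, old-Torch style, here is a minimal, hypothetical sketch of a fully-connected layer with explicit `forward`/`backward` methods and no autograd. The actual class names and signatures in `src` may differ; the point is that the batch dimension is handled by matrix products and broadcasting rather than Python loops.

```python
import numpy as np

class Linear:
    """Fully-connected layer (illustrative sketch, not the library's exact API).
    Forward and backward passes use pure NumPy, with no loop over the batch."""

    def __init__(self, in_features, out_features):
        # Small random weights and zero biases.
        self.W = np.random.randn(in_features, out_features) * 0.01
        self.b = np.zeros(out_features)
        self.grad_W = np.zeros_like(self.W)
        self.grad_b = np.zeros_like(self.b)

    def forward(self, X):
        # X: (batch, in_features) -> (batch, out_features), a single matmul.
        self._X = X
        return X @ self.W + self.b

    def backward(self, grad_out):
        # Accumulate parameter gradients and return the gradient w.r.t. the
        # input, again without any explicit loop over samples.
        self.grad_W += self._X.T @ grad_out
        self.grad_b += grad_out.sum(axis=0)
        return grad_out @ self.W.T
```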
The documentation for this project is generated using Sphinx and is available here. It includes detailed explanations of the implemented modules, usage examples, and mathematical foundations.
To install the required dependencies, run:
```bash
pip install -r requirements.txt
```
To use the library, simply import the necessary modules from the `src` directory. For example:
```python
from src.activation import Sigmoid, ReLU
from src.linear import Linear
from src.loss import BCELoss, MSELoss
```
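
For a fuller picture, here is a hedged sketch of how these modules might be wired into a single training step, assuming a Torch-like explicit forward/backward API with manual backpropagation. The method names and signatures (`forward`, `backward`, the loss call order) are assumptions for illustration; check the documentation for the library's actual interface.

```python
import numpy as np
from src.activation import Sigmoid
from src.linear import Linear
from src.loss import MSELoss

# Toy regression problem: 64 samples, 3 features, 1 target.
X = np.random.randn(64, 3)
y = np.random.randn(64, 1)

net = [Linear(3, 8), Sigmoid(), Linear(8, 1)]
loss_fn = MSELoss()

# Forward pass: push the batch through each module in order (no autograd).
out = X
for layer in net:
    out = layer.forward(out)
loss = loss_fn.forward(y, out)  # hypothetical (target, prediction) order

# Backward pass: propagate the gradient through the modules in reverse.
grad = loss_fn.backward(y, out)
for layer in reversed(net):
    grad = layer.backward(grad)
```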
Made with ❤️ by @dataymeric & @CharlesAttend during the first year of our Master DAC at Sorbonne University.
This project is licensed under the MIT License - see the LICENSE file for details.