LabML Neural Networks
This is a collection of simple PyTorch implementations of various neural network architectures and layers. We will keep adding to this collection.
Modules
✨ Transformers
The Transformers module contains implementations of multi-headed attention and relative multi-headed attention; a minimal sketch of the underlying computation follows.
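To give a feel for what the module computes, here is a minimal sketch of scaled dot-product attention split across several heads, written in plain PyTorch. This is not the module's own implementation: the learned query/key/value and output projections are omitted for brevity, and all names and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def attention(q, k, v, mask=None):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float('-inf'))
    return F.softmax(scores, dim=-1) @ v

# Split d_model into `heads` independent heads, attend per head, merge back.
batch, seq_len, d_model, heads = 2, 10, 64, 8
x = torch.randn(batch, seq_len, d_model)
d_head = d_model // heads
# [batch, heads, seq_len, d_head]
q = k = v = x.view(batch, seq_len, heads, d_head).transpose(1, 2)
out = attention(q, k, v)                                    # attend per head
out = out.transpose(1, 2).reshape(batch, seq_len, d_model)  # merge heads
print(out.shape)  # torch.Size([2, 10, 64])
```

A full multi-headed attention layer would add linear projections before the head split and after the merge.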
✨ Recurrent Highway Networks
✨ LSTM
✨ Capsule Networks
✨ Generative Adversarial Networks
✨ Sketch RNN
✨ Reinforcement Learning
- Proximal Policy Optimization with Generalized Advantage Estimation
- Deep Q Networks with Dueling Network, Prioritized Replay, and Double Q Network (illustrative sketches of GAE and the double Q-learning target appear after this list).
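As a rough illustration of the advantage-estimation step in the PPO implementation, here is a minimal sketch of Generalized Advantage Estimation in plain PyTorch. This is not the library's code; the function name, tensor shapes, and the default gamma/lam values are illustrative assumptions.

```python
import torch

def gae(rewards, values, dones, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation for a single trajectory.

    rewards, dones: tensors of shape [T]
    values: tensor of shape [T + 1] (the last entry is the bootstrap value)

    delta_t = r_t + gamma * V(s_{t+1}) * (1 - done_t) - V(s_t)
    A_t     = delta_t + gamma * lam * (1 - done_t) * A_{t+1}
    """
    advantages = torch.zeros_like(rewards)
    last_adv = 0.0
    # Work backwards through the trajectory, accumulating the discounted sum.
    for t in reversed(range(rewards.size(0))):
        not_done = 1.0 - dones[t]
        delta = rewards[t] + gamma * values[t + 1] * not_done - values[t]
        last_adv = delta + gamma * lam * not_done * last_adv
        advantages[t] = last_adv
    return advantages

rewards = torch.tensor([1.0, 0.0, 1.0])
values = torch.tensor([0.5, 0.4, 0.6, 0.2])  # V(s_0..s_3); last is bootstrap
dones = torch.tensor([0.0, 0.0, 1.0])
print(gae(rewards, values, dones))
```

And a similarly hedged sketch of the Double Q Network idea: the online network selects the next action while the target network evaluates it, which reduces Q-value overestimation. Again, the function name and shapes are illustrative, not the library's API.

```python
def double_dqn_target(online_q_next, target_q_next, rewards, dones, gamma=0.99):
    """Double Q-learning target for a batch of transitions.

    online_q_next, target_q_next: Q-values for the next states, shape [B, A]
    rewards, dones: tensors of shape [B]
    """
    best_actions = online_q_next.argmax(dim=1, keepdim=True)   # [B, 1]
    next_q = target_q_next.gather(1, best_actions).squeeze(1)  # [B]
    return rewards + gamma * (1.0 - dones) * next_q
```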
Installation
pip install labml_nn
Citing LabML
If you use LabML for academic research, please cite the library using the following BibTeX entry.
@misc{labml,
  author = {Varuna Jayasiri and Nipun Wijerathne},
  title = {LabML: A library to organize machine learning experiments},
  year = {2020},
  url = {https://lab-ml.com/},
}
Description
🧑‍🏫 60+ implementations/tutorials of deep learning papers with side-by-side notes 📝, including transformers (original, XL, Switch, Feedback, ViT, ...), optimizers (Adam, AdaBelief, Sophia, ...), GANs (CycleGAN, StyleGAN2, ...), 🎮 reinforcement learning (PPO, DQN), CapsNet, distillation, ... 🧠