Learn PyTorch with project-based tutorials. So far they are focused on applying recurrent neural networks to natural language tasks.
These tutorials aim to:
- Achieve specific goals with minimal parts
- Demonstrate modern techniques with common data
- Use low-level but low-complexity models
- Reach for readability over efficiency
The tutorials so far:
- Classifying Names with a Character-Level RNN
- Generating Names with a Character-Level RNN
- Translation with a Sequence to Sequence Network and Attention
- Intent Parsing and Slot Filling with Pointer Networks (work in progress)
I assume you have at least installed PyTorch, know Python, and understand Tensors:
- http://pytorch.org/ for installation instructions
- Deep Learning with PyTorch: A 60-minute Blitz to get started with PyTorch in general
- jcjohnson's PyTorch examples for a wide and deep overview
- Introduction to PyTorch for former Torchies if you are a former Lua Torch user
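If Tensors are new territory: they behave much like NumPy arrays, and these tutorials lean on only a handful of basic operations. A minimal sketch (assuming a working PyTorch install; the shapes and values here are just for illustration):

```python
import torch

# Two 2x3 tensors: one of random values, one of ones
a = torch.rand(2, 3)
b = torch.ones(2, 3)

# Element-wise addition keeps the shape; matrix multiplication follows the usual rules
c = a + b                # still 2x3
d = torch.mm(a, b.t())   # (2x3) x (3x2) -> 2x2

print(c)
print(d.size())          # torch.Size([2, 2])
```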
It would also be useful to know about Recurrent Neural Networks and how they work:
- The Unreasonable Effectiveness of Recurrent Neural Networks shows a bunch of real-life examples
- Deep Learning, NLP, and Representations for an overview on word embeddings and RNNs for NLP
- Understanding LSTM Networks is about LSTMs specifically but also informative about RNNs in general
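The core idea behind all of these is a loop that mixes the current input with a hidden state at every step, which is roughly the pattern the character-level tutorials follow. Here is an illustrative sketch of that pattern (the class name and sizes are made up for this example and assume a recent PyTorch, not taken from any one tutorial):

```python
import torch
import torch.nn as nn

class SimpleRNN(nn.Module):
    """A bare-bones recurrent cell: new hidden state from current input + old hidden state."""
    def __init__(self, input_size, hidden_size, output_size):
        super(SimpleRNN, self).__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)  # input + hidden -> hidden
        self.h2o = nn.Linear(hidden_size, output_size)               # hidden -> output

    def forward(self, input, hidden):
        combined = torch.cat((input, hidden), 1)
        hidden = torch.tanh(self.i2h(combined))
        output = self.h2o(hidden)
        return output, hidden

    def init_hidden(self):
        return torch.zeros(1, self.hidden_size)

rnn = SimpleRNN(input_size=10, hidden_size=20, output_size=5)
hidden = rnn.init_hidden()

# Feed a short sequence one step at a time, carrying the hidden state forward
for _ in range(3):
    step_input = torch.rand(1, 10)
    output, hidden = rnn(step_input, hidden)
```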
And the papers that introduced many of these topics:
- Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
- Sequence to Sequence Learning with Neural Networks
- Neural Machine Translation by Jointly Learning to Align and Translate
- A Neural Conversational Model
The quickest way to run these on a fresh Linux or Mac machine is to install Anaconda:
```
curl -LO https://repo.continuum.io/archive/Anaconda3-4.3.0-Linux-x86_64.sh
bash Anaconda3-4.3.0-Linux-x86_64.sh
```
Then install PyTorch:
```
conda install pytorch -c soumith
```
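If the install succeeded, importing torch from Python should work (the exact version printed depends on what conda resolved):

```python
import torch

print(torch.__version__)  # whichever version conda installed
print(torch.rand(3, 3))   # a 3x3 tensor of random values
```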
Then clone this repo and start Jupyter Notebook:
```
git clone http://github.com/spro/practical-pytorch
cd practical-pytorch
jupyter notebook
```