PyTorch Lightning: simplifying deep learning

When it comes to training neural networks in Python, one of the leading frameworks is PyTorch. It has a simple interface, similar to NumPy's. However, many aspects of training (applying loss functions, updating weights, computing metrics on the train, validation, and test datasets, encoding those datasets, building dataloaders, writing the training loop, switching gradient tracking on or off, and plenty more) follow standardised layouts, leading to lots of repetition between different training scripts, often referred to as "boilerplate".
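To make the boilerplate concrete, here is a minimal sketch of a plain PyTorch training loop on a toy regression problem (the model, data, and hyperparameters are purely illustrative). Note how much of it is the same scaffolding every project repeats: the epoch loop, `zero_grad`/`backward`/`step`, and the separate eval-mode, no-grad bookkeeping for validation.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy regression data, purely illustrative
X = torch.randn(64, 4)
y = X.sum(dim=1, keepdim=True)
loader = DataLoader(TensorDataset(X, y), batch_size=16)

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# The repetitive part: epoch loop, zero_grad, backward, step
model.train()
for epoch in range(5):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()

# Evaluation needs its own bookkeeping: eval mode, gradient tracking off
model.eval()
with torch.no_grad():
    val_loss = loss_fn(model(X), y).item()
```

Every one of these steps is easy to forget or mis-order (e.g. skipping `zero_grad`, or evaluating with gradients still enabled), which is exactly the repetition the next section is about removing.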

At Henesis we've been using PyTorch Lightning for some time: an extension of PyTorch that encapsulates all of the above, and more, into simple classes (LightningModule, LightningDataModule, Trainer) and methods, greatly reducing our boilerplate code. It also enables easy, flexible result logging, and hides an immense amount of efficiency-related code, making our training runs both more transparent and faster!

The benefit to our ML team has been huge: we can write neural network experiments rapidly and consistently, with far less risk of forgetting simple steps in the training logic.

» Learn more: