PyTorch

PyTorch is an open source machine learning library based on the Torch library, and is used for applications such as computer vision and natural language processing. PyTorch was primarily developed by Facebook's AI Research lab (FAIR). It is free and open-source software released under the Modified BSD license.

Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface. A number of pieces of deep learning software are built on top of PyTorch, including Tesla Autopilot, Uber's Pyro, Hugging Face's Transformers, PyTorch Lightning, and Catalyst.

PyTorch was initially built to be flexible and modular for research, with the stability and support needed for production deployment. As a Python package, it provides high-level features such as tensor computation (like NumPy) with strong GPU acceleration, and TorchScript for an easy transition between eager mode and graph mode. Recent releases also provide graph-based execution, distributed training, mobile deployment, and quantization.
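The eager-to-graph transition mentioned above can be sketched with `torch.jit.script`, which compiles an ordinary Python function into a TorchScript graph (a minimal illustration, assuming PyTorch is installed; the function name is chosen for this example):

```python
import torch

def scale_and_sum(x: torch.Tensor) -> torch.Tensor:
    # Ordinary eager-mode Python function over tensors.
    return (2.0 * x).sum()

# torch.jit.script compiles the function into a TorchScript graph,
# which can be serialized and later run without the Python interpreter.
scripted = torch.jit.script(scale_and_sum)

x = torch.ones(4)
print(scale_and_sum(x))  # eager mode: tensor(8.)
print(scripted(x))       # graph mode: tensor(8.)
```

Both calls produce the same result; the scripted version additionally carries a graph representation that can be saved with `scripted.save(...)` for deployment.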

PyTorch as a Python package provides two high-level features:

  • Tensor computation (like NumPy) with strong acceleration via graphics processing units (GPUs).
  • Deep neural networks built on a tape-based automatic differentiation system.
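Both features can be shown in a few lines (a minimal sketch, assuming PyTorch is installed; it falls back to the CPU when no GPU is present):

```python
import torch

# Tensor computation: NumPy-like operations, optionally on a GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(3, 3, device=device)
b = torch.randn(3, 3, device=device)
c = a @ b  # matrix multiplication on the chosen device

# Tape-based automatic differentiation: operations on tensors with
# requires_grad=True are recorded, and backward() replays the tape.
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y = x1^2 + x2^2
y.backward()
print(x.grad)        # dy/dx = 2*x -> tensor([4., 6.])
```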

PyTorch features:

The major features of PyTorch are mentioned below:

  • Easy Interface - PyTorch offers an easy-to-use API, which makes it simple to operate and run on Python. Code execution in this framework is also straightforward.
  • Python usage - PyTorch is considered Pythonic and integrates smoothly with the Python data science stack, so it can leverage the services and functionality of the Python ecosystem.
  • Computational graphs - PyTorch provides dynamic computational graphs, which users can change at runtime. This is especially useful when a developer does not know in advance how much memory a neural network model will require.
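Because the graph is rebuilt on every forward pass, ordinary Python control flow can change the computation's structure at runtime. A small sketch (the function name and numbers are illustrative, not from any PyTorch API):

```python
import torch

def dynamic_forward(x: torch.Tensor) -> torch.Tensor:
    h = x
    # The number of iterations (and hence the graph's depth)
    # depends on the data itself.
    while h.norm() < 10.0:
        h = torch.relu(h * 2.0 + 1.0)
    return h.sum()

x = torch.tensor([0.5, 0.5], requires_grad=True)
out = dynamic_forward(x)
out.backward()   # gradients flow through however many
print(x.grad)    # iterations the loop actually ran
```

Each call may trace a different graph, yet `backward()` still computes correct gradients through exactly the operations that ran.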

PyTorch is known for having three levels of abstraction, as given below:

  • Tensor - Imperative n-dimensional array which runs on GPUs.
  • Variable - Node in a computational graph; stores data and gradient.
  • Module - Neural network layer which stores state or learnable weights.
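The Module level can be sketched as follows (a minimal example, assuming PyTorch is installed; the class name and layer sizes are illustrative). Note that since PyTorch 0.4, `Variable` has been merged into `Tensor`, so plain tensors play that role:

```python
import torch
import torch.nn as nn

# A Module bundles learnable weights (state) with a forward computation.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)   # learnable weight + bias
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

net = TinyNet()
x = torch.randn(1, 4)   # a plain Tensor; it subsumes the old Variable
out = net(x)
print(out.shape)        # torch.Size([1, 2])
```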

Advantages of using PyTorch:

  • It is easy to debug and understand the code.
  • It includes many of the same layers as Torch.
  • It can be considered an extension of NumPy to GPUs.
  • It allows building networks whose structure is dependent on computation itself.

Cons of using PyTorch:

  • PyTorch was released in 2016, so it is newer than other frameworks, has fewer users, and is less widely known.
  • Absence of built-in monitoring and visualization tools like TensorBoard.
  • The developer community is small compared to other frameworks.

Contributing:

If you are interested in becoming a contributor and getting involved in developing the PyTorch framework, see CONTRIBUTING for details on contributing to PyTorch.

PyTorch has a 90-day release cycle for major releases, and all types of contributions are appreciated. If you are planning to contribute bug fixes, you can do so without any further discussion.

Communication:

  • Forums: Discuss implementations, research, etc. https://discuss.pytorch.org
  • Slack: The [PyTorch Slack](https://pytorch.slack.com/) hosts a primary audience of moderate to experienced PyTorch users and developers for general chat, online discussions, collaboration, etc. If you are a beginner looking for help, the primary medium is the PyTorch Forums. If you need a Slack invite, please fill out this form: https://goo.gl/forms/PP1AGvNHpSaJP8to1
  • Newsletter: No-noise, a one-way email newsletter with important announcements about PyTorch. You can sign up for the newsletter here: https://eepurl.com/cbG0rv
  • Facebook Page: Important announcements about PyTorch. https://www.facebook.com/pytorch
