Author: Soumith Chintala

What is PyTorch?

It’s a Python-based scientific computing package targeted at two audiences:

  • A replacement for NumPy to use the power of GPUs
  • A deep learning research platform that provides maximum flexibility and speed

Getting Started

Tensors

Tensors are similar to NumPy’s ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing.

In [2]:
from __future__ import print_function
import torch

Construct a 5x3 matrix, uninitialized:

In [3]:
x = torch.empty(5, 3)
print(x)
tensor([[0.0000, 0.0000, 0.0000],
        [0.0000, 0.0373, 0.0000],
        [0.0000, 0.0000, 0.0000],
        [0.0000, 0.5614, 0.0000],
        [0.5495, 0.0000, 0.0000]])

Construct a randomly initialized matrix:

In [4]:
x = torch.rand(5, 3)
print(x)
tensor([[0.8082, 0.4229, 0.0403],
        [0.9461, 0.5608, 0.8760],
        [0.2697, 0.0204, 0.0426],
        [0.2242, 0.9969, 0.1969],
        [0.4139, 0.9685, 0.7692]])

Construct a matrix filled with zeros, of dtype long:

In [5]:
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]])

Construct a tensor directly from data:

In [6]:
x = torch.tensor([5.5, 3])
print(x)
tensor([5.5000, 3.0000])
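
``torch.tensor`` infers the dtype from the data, and nested lists produce higher-dimensional tensors (a small sketch):

print(torch.tensor([[1, 2], [3, 4]]))           # integers give a LongTensor
print(torch.tensor([1, 2], dtype=torch.float))  # or force a dtype explicitly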

Or create a tensor based on an existing tensor. These methods will reuse properties of the input tensor, e.g. its dtype, unless new values are provided by the user.

In [7]:
x = x.new_ones(5, 3, dtype=torch.double)      # new_* methods take in sizes
print(x)

x = torch.randn_like(x, dtype=torch.float)    # override dtype!
print(x)                                      # result has the same size
tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]], dtype=torch.float64)

tensor([[ 0.5581, -0.8885, -0.8123],
        [ 0.1519, -0.3300,  0.1263],
        [ 2.7811, -0.2736, -0.6294],
        [ 0.0645, -0.5117,  1.8290],
        [ 0.7962, -0.5353,  1.5416]])

Get its size:

In [8]:
print(x.size())
torch.Size([5, 3])

Note

``torch.Size`` is in fact a tuple, so it supports all tuple operations.
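
For instance, you can unpack or index it like any other tuple (a minimal illustration using the ``x`` above):

rows, cols = x.size()    # tuple unpacking
print(rows * cols)       # 15
print(x.size()[0])       # indexing: 5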

Operations

There are multiple syntaxes for operations. In the following example, we will take a look at the addition operation.

Addition: syntax 1

In [9]:
y = torch.rand(5, 3)
print(x + y)
tensor([[ 0.7348, -0.8351, -0.6750],
        [ 0.5426,  0.3989,  1.0624],
        [ 3.6226,  0.3536, -0.0133],
        [ 0.1708,  0.4361,  2.1224],
        [ 1.3221,  0.0716,  2.4052]])

Addition: syntax 2

In [10]:
print(torch.add(x, y))
tensor([[ 0.7348, -0.8351, -0.6750],
        [ 0.5426,  0.3989,  1.0624],
        [ 3.6226,  0.3536, -0.0133],
        [ 0.1708,  0.4361,  2.1224],
        [ 1.3221,  0.0716,  2.4052]])

Addition: providing an output tensor as argument

In [11]:
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
tensor([[ 0.7348, -0.8351, -0.6750],
        [ 0.5426,  0.3989,  1.0624],
        [ 3.6226,  0.3536, -0.0133],
        [ 0.1708,  0.4361,  2.1224],
        [ 1.3221,  0.0716,  2.4052]])

Addition: in-place

In [12]:
# adds x to y
y.add_(x)
print(y)
tensor([[ 0.7348, -0.8351, -0.6750],
        [ 0.5426,  0.3989,  1.0624],
        [ 3.6226,  0.3536, -0.0133],
        [ 0.1708,  0.4361,  2.1224],
        [ 1.3221,  0.0716,  2.4052]])

Note

Any operation that mutates a tensor in-place is post-fixed with an ``_``. For example, ``x.copy_(y)`` and ``x.t_()`` will change ``x``.
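
A quick sketch of the convention, using fresh tensors so the ``x`` and ``y`` above are left untouched:

a = torch.ones(2, 2)
b = torch.zeros(2, 2)
b.copy_(a)    # copies a's values into b, in place
b.t_()        # transposes b in place
print(b)      # all ones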

You can use standard NumPy-like indexing, with all the bells and whistles!

In [13]:
print(x[:, 1])
tensor([-0.8885, -0.3300, -0.2736, -0.5117, -0.5353])
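
Other NumPy idioms such as slicing and boolean masks work too (a brief sketch on the same ``x``):

print(x[1:3])     # rows 1 and 2
print(x[:, -1])   # last column
print(x[x > 0])   # boolean mask: positive entries, as a 1-D tensor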

Resizing: If you want to resize/reshape a tensor, you can use ``torch.view``:

In [14]:
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
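
Keep in mind that ``view`` shares the underlying data with the original tensor rather than copying it, so a write through the view is visible in the source (a minimal sketch):

y[0] = 100.0
print(x[0, 0])    # also 100.0, since y views x's storage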

If you have a one-element tensor, use ``.item()`` to get the value as a Python number:

In [15]:
x = torch.randn(1)
print(x)
print(x.item())
tensor([1.4785])

1.47847104073
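
Note that ``.item()`` works for any one-element tensor regardless of shape, but raises an error on larger tensors (a quick sketch):

print(torch.tensor([[7]]).item())   # 7: shape (1, 1) still holds one element
# torch.ones(2).item()              # would raise: only one-element tensors convert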

Read later:

100+ Tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random numbers, etc., are described in the docs: http://pytorch.org/docs/torch
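
As a taste, here is a small, non-exhaustive sampler of such operations:

m = torch.randn(3, 4)
print(m.t().size())            # transpose: torch.Size([4, 3])
print(torch.matmul(m, m.t()))  # matrix multiply: 3x4 times 4x3 gives 3x3
print(m.mean())                # reduction over all elements
print(m.clamp(min=0))          # elementwise clamp (negatives become 0)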

NumPy Bridge

Converting a Torch Tensor to a NumPy array and vice versa is a breeze.

The Torch Tensor and NumPy array will share their underlying memory locations, and changing one will change the other.

Converting a Torch Tensor to a NumPy Array

In [16]:
a = torch.ones(5)
print(a)
tensor([1., 1., 1., 1., 1.])

In [17]:
b = a.numpy()
print(b)
[1. 1. 1. 1. 1.]

See how the numpy array changed in value.

In [18]:
a.add_(1)
print(a)
print(b)
tensor([2., 2., 2., 2., 2.])

[2. 2. 2. 2. 2.]

Converting NumPy Array to Torch Tensor

See how changing the NumPy array changed the Torch Tensor automatically:

In [19]:
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
[2. 2. 2. 2. 2.]

tensor([2., 2., 2., 2., 2.], dtype=torch.float64)

All the Tensors on the CPU except a CharTensor support converting to NumPy and back.
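
For example, an integer tensor round-trips fine (a minimal check; CharTensor, i.e. ``torch.int8``, is the documented exception):

t = torch.zeros(3, dtype=torch.int32)
n = t.numpy()               # shares memory, dtype int32
print(torch.from_numpy(n))  # back to a Torch tensor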

CUDA Tensors

Tensors can be moved onto any device using the ``.to`` method.

In [21]:
# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
    x = x.to(device)                       # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change dtype together!
tensor([2.4785], device='cuda:0')
tensor([2.4785], dtype=torch.float64)