.. _sphx_glr_beginner_former_torchies_tensor_tutorial.py:

Tensors
=======

Tensors behave almost exactly the same way in PyTorch as they do in
Torch.

Create a tensor of size (5 x 7) with uninitialized memory:

.. code-block:: python

    import torch
    a = torch.FloatTensor(5, 7)

Initialize a tensor with values drawn from a normal distribution with
mean=0, variance=1:

.. code-block:: python

    a = torch.randn(5, 7)
    print(a)
    print(a.size())

.. note::
    ``torch.Size`` is in fact a tuple, so it supports the same operations
    as a tuple.

Inplace / Out-of-place
----------------------

The first difference is that ALL operations that modify a tensor
in-place have an ``_`` postfix. For example, ``add`` is the out-of-place
version, and ``add_`` is the in-place version.

.. code-block:: python

    a.fill_(3.5)
    # a has now been filled with the value 3.5

    b = a.add(4.0)
    # a is still filled with 3.5
    # new tensor b is returned with values 3.5 + 4.0 = 7.5

    print(a, b)

Some operations like ``narrow`` do not have an in-place version, and
hence ``.narrow_`` does not exist. Similarly, some operations like
``fill_`` do not have an out-of-place version, so ``.fill`` does not
exist.

Zero Indexing
-------------

Another difference is that Tensors are zero-indexed. (In Lua, tensors
are one-indexed.)

.. code-block:: python

    b = a[0, 3]  # select the element in the 1st row, 4th column of a

Tensors can also be indexed with Python's slicing notation:

.. code-block:: python

    b = a[:, 3:5]  # selects all rows, 4th and 5th columns of a

No camel casing
---------------

The next small difference is that function names are no longer
camelCase. For example, ``indexAdd`` is now called ``index_add_``.

.. code-block:: python

    x = torch.ones(5, 5)
    print(x)

.. code-block:: python

    z = torch.Tensor(5, 2)
    z[:, 0] = 10
    z[:, 1] = 100
    print(z)

.. code-block:: python

    # adds z's 1st column (the 10s) to x's 5th column, and z's 2nd
    # column (the 100s) to x's 1st column
    x.index_add_(1, torch.LongTensor([4, 0]), z)
    print(x)

Numpy Bridge
------------

Converting a torch Tensor to a numpy array and vice versa is a breeze.
The torch Tensor and numpy array share their underlying memory
locations, so changing one will change the other.

Converting torch Tensor to numpy Array
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

    a = torch.ones(5)
    print(a)

.. code-block:: python

    b = a.numpy()
    print(b)

.. code-block:: python

    a.add_(1)
    print(a)
    print(b)  # see how the numpy array changed in value

Converting numpy Array to torch Tensor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

    import numpy as np
    a = np.ones(5)
    b = torch.from_numpy(a)
    np.add(a, 1, out=a)
    print(a)
    print(b)  # see how changing the np array changed the torch Tensor automatically

All the Tensors on the CPU except a CharTensor support converting to
NumPy and back.

CUDA Tensors
------------

CUDA tensors are nice and easy in PyTorch, and transferring a tensor
from the CPU to the GPU retains its underlying type.

.. code-block:: python

    # let us run this cell only if CUDA is available
    if torch.cuda.is_available():

        # creates a LongTensor and transfers it
        # to GPU as torch.cuda.LongTensor
        a = torch.LongTensor(10).fill_(3).cuda()
        print(type(a))

        # transfers it back to the CPU,
        # back to being a torch.LongTensor
        b = a.cpu()
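One practical consequence worth noting: the NumPy bridge above only
works for CPU tensors, so a CUDA tensor must be copied back to the CPU
before conversion. Here is a minimal sketch (guarded so it is skipped on
CPU-only machines; the variable names are illustrative, not from the
original tutorial):

.. code-block:: python

    import torch

    if torch.cuda.is_available():
        a = torch.FloatTensor(3, 3).fill_(1).cuda()  # move to the GPU
        # a.numpy() would raise an error here: NumPy cannot see GPU memory.
        # Copy the tensor back to the CPU first, then convert.
        b = a.cpu().numpy()
        print(b)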
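Relatedly, because the NumPy bridge works by sharing memory, the link
only survives operations that modify the array in place; rebinding the
name to a new array silently breaks it. A short sketch of this pitfall,
added here for illustration:

.. code-block:: python

    import numpy as np
    import torch

    a = np.ones(3)
    b = torch.from_numpy(a)

    a += 1      # in-place: the shared buffer changes, so b becomes [2., 2., 2.]
    print(b)

    a = a + 1   # out-of-place: rebinds a to a brand-new array
    print(b)    # b still shows [2., 2., 2.] -- it no longer tracks a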
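Finally, one way to convince yourself that an ``_``-suffixed method
really mutates its tensor rather than allocating a new one is to compare
storage addresses. This sketch assumes ``data_ptr()``, which returns the
memory address of a tensor's first element:

.. code-block:: python

    import torch

    t = torch.ones(4)
    addr = t.data_ptr()   # address of t's underlying storage

    t.add_(1)             # in-place: the storage is reused
    assert t.data_ptr() == addr

    u = t.add(1)          # out-of-place: u gets fresh storage
    assert u.data_ptr() != t.data_ptr()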