3 It Starts with a Tensor

This chapter covers:

  • Tensors, the basic data structure in PyTorch
  • Indexing and operating on PyTorch tensors to explore and manipulate data
  • Interoperating with NumPy multidimensional arrays
  • Moving computations to the GPU for speed

In the previous chapter, we took a tour of some of the many applications deep learning enables. They invariably consisted of taking data in some form, like images or text, and producing data in another form, like labels, numbers, text, or more images. Taken from this angle, deep learning really consists of building a system that can transform data from one representation to another. This transformation is driven by extracting commonalities from a series of examples that demonstrate the desired mapping. For example, the system might note the general shape of a dog and the typical colors of a golden retriever. By combining the two image properties, the system can correctly map images with a given shape and color to the golden retriever label, instead of a black Lab (or a tawny tomcat, for that matter). The resulting system can consume broad swathes of similar inputs and produce meaningful output for those inputs.
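
To make the topics listed above concrete before we dive in, here is a minimal sketch of what they look like in code. It assumes a working PyTorch installation; the example values are invented purely for illustration, and the final lines run only on a machine with a CUDA-capable GPU:

import torch

# A tensor is PyTorch's fundamental multidimensional array.
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])

# Indexing works much like Python lists and NumPy arrays.
first_point = points[0]     # first row: tensor([4., 1.])
x_coords = points[:, 1]     # second column of every row

# The dtype attribute controls the numeric type of the elements.
short_points = torch.ones(3, 2, dtype=torch.short)

# Zero-copy interoperability with NumPy (CPU tensors only).
points_np = points.numpy()            # shares memory with points
points_again = torch.from_numpy(points_np)

# Move computation to the GPU if one is available.
if torch.cuda.is_available():
    points_gpu = points.to(device="cuda")

Each of these lines gets its own treatment in the sections that follow.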

3.1  Tensors are multidimensional arrays

3.1.1  From Python lists to PyTorch tensors

3.1.2  Constructing our first tensors

3.1.3  The essence of tensors

3.2  Indexing tensors

3.3  Named tensors

3.4  Tensor element types

3.4.1  Specifying the numeric type with dtype

3.4.2  A dtype for every occasion

3.4.3  Managing a tensor’s dtype attribute

3.5  The tensor API

3.6  Tensors: scenic views of storage

3.6.1  Indexing into storage

3.6.2  Modifying stored values: in-place operations

3.7  Tensor metadata: size, offset, stride

3.7.1  Views over another tensor’s storage

3.7.2  Transposing without copying

3.7.3  Transposing in higher dimensions

3.7.4  Contiguous tensors

3.8  NumPy interoperability