3 It starts with a tensor


This chapter covers

  • Understanding tensors, the basic data structure in PyTorch
  • Indexing and operating on tensors
  • Interoperating with NumPy multidimensional arrays
  • Moving computations to the GPU for speed

In the previous chapter, we took a tour of some of the many applications that deep learning enables. They invariably consisted of taking data in some form, like images or text, and producing data in another form, like labels, numbers, or more images or text. Viewed from this angle, deep learning really consists of building a system that can transform data from one representation to another. This transformation is driven by extracting commonalities from a series of examples that demonstrate the desired mapping. For example, the system might note the general shape of a dog and the typical colors of a golden retriever. By combining the two image properties, the system can correctly map images with a given shape and color to the golden retriever label, instead of a black Lab (or a tawny tomcat, for that matter). The resulting system can consume broad swaths of similar inputs and produce meaningful output for those inputs.

3.1 The world as floating-point numbers

3.2 Tensors: Multidimensional arrays

3.2.1 From Python lists to PyTorch tensors
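
As a first taste, here is a minimal sketch of the idea (the variable names are ours, for illustration): calling torch.tensor on a plain Python list copies its values into a one-dimensional tensor.

import torch

a = [1.0, 2.0, 3.0]   # a plain Python list of floats
t = torch.tensor(a)   # copy the list's values into a 1D tensor
print(t)              # tensor([1., 2., 3.])
print(t[1])           # tensor(2.) -- elements come back as tensors, not plain floats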

3.2.2 Constructing our first tensors
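
A brief sketch of constructor-style creation, with torch.ones and torch.zeros as the simplest examples (shapes here are illustrative):

import torch

ones = torch.ones(3)         # a 1D tensor holding three 1.0 values
points = torch.zeros(2, 3)   # a 2x3 tensor filled with zeros
print(ones)                  # tensor([1., 1., 1.])
print(points.shape)          # torch.Size([2, 3])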

3.2.3 The essence of tensors

3.3 Indexing tensors
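
Tensors support the same range-indexing notation as Python lists, extended to multiple dimensions. A short sketch, with illustrative values:

import torch

points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
print(points[0])      # first row: tensor([4., 1.])
print(points[0, 1])   # single element: tensor(1.)
print(points[1:])     # all rows after the first
print(points[:, 0])   # first column of every row: tensor([4., 5., 2.])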

3.4 Named tensors
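
Named tensors let us label dimensions and refer to them by name instead of by position. A minimal sketch; note that named tensors were a prototype feature at the time of writing, so PyTorch may warn that the API is subject to change:

import torch

# The dimension names here are illustrative.
img = torch.randn(3, 5, 5, names=('channels', 'rows', 'columns'))
print(img.names)              # ('channels', 'rows', 'columns')

summed = img.sum('channels')  # reduce over a dimension by name, not index
print(summed.names)           # ('rows', 'columns')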

3.5 Tensor element types

3.5.1 Specifying the numeric type with dtype
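
Tensor factory functions accept a dtype argument that fixes the numeric type of the elements. A short sketch:

import torch

double_points = torch.ones(10, 2, dtype=torch.double)             # 64-bit floats
short_points = torch.tensor([[1, 2], [3, 4]], dtype=torch.short)  # 16-bit integers
print(double_points.dtype)   # torch.float64
print(short_points.dtype)    # torch.int16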

3.5.2 A dtype for every occasion

3.5.3 Managing a tensor’s dtype attribute
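
Conversions between dtypes are handled by the to method, or by shorthand casting methods named after the target type. A brief sketch:

import torch

points = torch.randn(3, 2)             # defaults to torch.float32
points_short = points.to(torch.short)  # returns a new tensor with the new dtype
points_double = points.double()        # shorthand for .to(torch.double)
print(points_short.dtype, points_double.dtype)   # torch.int16 torch.float64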

3.6 The tensor API
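
Most tensor operations come in two equivalent forms: a function in the torch module and a method on the tensor object itself. A small sketch using transpose:

import torch

a = torch.ones(3, 2)
a_t = torch.transpose(a, 0, 1)   # function form, in the torch namespace
b_t = a.transpose(0, 1)          # equivalent method form on the tensor
print(a_t.shape, b_t.shape)      # torch.Size([2, 3]) torch.Size([2, 3])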

3.7 Tensors: Scenic views of storage

3.7.1 Indexing into storage
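
A tensor's values live in a contiguous, one-dimensional storage that can be indexed directly, regardless of the tensor's shape. A sketch (recent PyTorch releases steer users toward untyped_storage, so storage() may emit a deprecation warning):

import torch

points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
storage = points.storage()   # the flat 1D block of memory backing the 2D tensor
print(storage[0])            # 4.0 -- indexed linearly, not by row and column
print(len(storage))          # 6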

3.7.2 Modifying stored values: In-place operations
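
Operations whose names end in a trailing underscore modify the tensor in place rather than returning a new one. For example:

import torch

a = torch.ones(3, 2)
a.zero_()    # the trailing underscore marks an in-place operation
print(a)     # tensor([[0., 0.], [0., 0.], [0., 0.]])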

3.8 Tensor metadata: Size, offset, and stride
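
Size, storage offset, and stride are the three pieces of metadata that map a tensor's indices onto its storage. A brief sketch, with illustrative values:

import torch

points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
second_point = points[1]                 # a view sharing the same storage
print(second_point.storage_offset())    # 2 -- starts two elements into the storage
print(second_point.size())              # torch.Size([2])
print(points.stride())                  # (2, 1): step 2 to the next row, 1 to the next column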
