### Key Terms

- Tensor
- Graphs
- DistBelief
- CPU
- GPU
- TPU
- Edge TPU
- TensorFlow Lite
- Pixel Visual Core (PVC)

### Tensor

TensorFlow takes its name directly from its core concept: the tensor. In TensorFlow, all computations involve tensors. A tensor is an n-dimensional vector or matrix that can represent any type of data. All values in a tensor hold the same data type, with a known or partially known shape; the shape is the dimensionality of the matrix or array.

A tensor can be created from input data or as the result of a computation. In TensorFlow, all operations are conducted inside a graph: a set of computations that occur successively. Each operation is called an op node, and the op nodes are connected to one another.

The graph summarizes the ops and the connections between the nodes, but it does not display the values.
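To make the idea of rank and shape concrete, here is a minimal plain-Python sketch (not the TensorFlow API) that infers the shape of a tensor represented as nested lists; the helper name `shape_of` is illustrative, not part of any library:

```python
def shape_of(tensor):
    """Infer the shape of a tensor represented as nested Python lists.

    Assumes the nesting is regular (every sub-list at a given depth
    has the same length), as a tensor's shape requires.
    """
    shape = []
    while isinstance(tensor, list):
        shape.append(len(tensor))
        tensor = tensor[0]
    return shape

scalar = 5                          # rank 0, shape []
vector = [1, 2, 3]                  # rank 1, shape [3]
matrix = [[1, 2], [3, 4], [5, 6]]   # rank 2, shape [3, 2]

print(shape_of(scalar))  # []
print(shape_of(vector))  # [3]
print(shape_of(matrix))  # [3, 2]
```

In TensorFlow proper, the same information is carried by a tensor's `shape` attribute, and a dimension may be left unknown until runtime, which is what "partially known shape" refers to.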

### Graphs

TensorFlow makes use of a graph framework. The graph collects and describes all the series of computations done during training. The graph has several advantages:

- It was designed to run on multiple CPUs or GPUs, and even on mobile operating systems.
- The portability of the graph makes it possible to preserve the computations for immediate or later use: the graph can be saved and executed in the future.
- All the computations in the graph are done by connecting tensors together.
- The graph is made of nodes and edges. A node carries a mathematical operation and produces an output endpoint; the edges describe the input/output relationships between nodes.
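The node-and-edge structure above can be sketched in a few lines of plain Python. This is a toy model, not the TensorFlow API: the class names `OpNode` and `Constant` are invented for illustration. The graph is built first and evaluated afterwards, mirroring TensorFlow's define-then-run style:

```python
class OpNode:
    """A node carries a mathematical operation; the edges to its
    inputs describe the input/output relationships between nodes."""

    def __init__(self, op, *inputs):
        self.op = op          # the mathematical operation at this node
        self.inputs = inputs  # edges: which nodes feed this one

    def evaluate(self):
        # Evaluate the input nodes first, then apply this node's op.
        return self.op(*(node.evaluate() for node in self.inputs))


class Constant(OpNode):
    """A leaf node holding a fixed value (no incoming edges)."""

    def __init__(self, value):
        self.value = value
        self.inputs = ()

    def evaluate(self):
        return self.value


# Build a graph for (a + b) * c, then run it.
a, b, c = Constant(2), Constant(3), Constant(4)
add = OpNode(lambda x, y: x + y, a, b)
mul = OpNode(lambda x, y: x * y, add, c)
print(mul.evaluate())  # 20
```

Separating graph construction from execution is what lets TensorFlow save a graph, distribute it across devices, and run it later.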

### DistBelief

Starting in 2011, Google Brain developed DistBelief as a machine-learning system based on deep-learning neural networks. Its use grew rapidly across diverse Alphabet companies in both research and commercial applications. Google assigned several computer scientists, including Jeff Dean, to simplify and refactor the codebase of DistBelief into a faster, more robust application-grade library, which became TensorFlow. In 2009, the team, led by Geoffrey Hinton, had implemented generalized backpropagation and other improvements that allowed neural networks to reach significantly higher accuracy; for instance, a 25% reduction in errors in speech recognition.

### CPU

The **Central Processing Unit** (CPU) is the electronic chip that operates as the brain of the computer, carrying out the basic arithmetic (addition, subtraction, multiplication, and division), logical, control, and input/output operations specified by the instructions of a computer program.

### GPU

The GPU, or **Graphics Processing Unit**, is a specialized electronic chip designed to render 2-dimensional and 3-dimensional graphics jointly with a CPU. GPUs are now harnessed more broadly to accelerate computational workloads in areas such as financial modeling, cutting-edge scientific research, deep learning, analytics, and oil and gas exploration.

### TPU

In May 2016, Google announced the Tensor Processing Unit (TPU), an application-specific integrated circuit built especially for machine learning and tailored for TensorFlow. Google stated it had been running TPUs inside its data centers for more than a year and had found them to deliver an order of magnitude better performance per watt for machine learning.

In May 2017, Google announced the second generation, along with the availability of TPUs on Google Compute Engine. The second-generation TPUs provide up to 180 teraflops of performance and, when organized into clusters of 64 TPUs, up to 11.5 petaflops.

In May 2018, Google announced the third-generation TPUs, providing up to 420 teraflops of performance and 128 GB of HBM. Cloud TPU v3 Pods offer 100+ petaflops of performance and 32 TB of HBM.

### Edge TPU

In July 2018, the Edge TPU was announced. The Edge TPU is a purpose-built application-specific integrated circuit (ASIC) chip designed to run TensorFlow Lite machine learning (ML) models on small client computing devices, such as smartphones, in a setting referred to as edge computing.

### TensorFlow Lite

In May 2017, Google released a software stack for mobile development, TensorFlow Lite. In January 2019, the TensorFlow team released a developer preview of the mobile GPU inference engine with Metal Compute Shaders on iOS devices and OpenGL ES 3.1 Compute Shaders on Android devices.

### Pixel Visual Core (PVC)

In October 2017, Google released the Google Pixel 2, which featured the Pixel Visual Core (PVC), a fully programmable image, vision, and AI processor for mobile devices. The PVC supports TensorFlow for machine learning and Halide for image processing.