What’s new in TensorFlow 2.0

Google’s TensorFlow 2.0 is now available in beta, with a focus on improving performance, ease of use, compatibility, and continuity


TensorFlow 2.0, the next major version of Google’s open source machine learning framework, is available in its first beta version.

TensorFlow, Google’s contribution to the world of machine learning and data science, is a general framework for quickly developing neural networks. Despite being relatively new, TensorFlow has already found wide adoption as a common platform for deep learning, due to its powerful abstractions and ease of use.

Where to download TensorFlow 

Installation instructions for TensorFlow on Ubuntu Linux, MacOS, and Microsoft Windows are available on the TensorFlow project page. Docker users can grab a prebuilt TensorFlow Docker image directly from Docker Hub. You can also build TensorFlow from source; the code is available on GitHub.

TensorFlow 2.0 new features

In addition to addressing performance issues, the builders of TensorFlow 2.0 see the new major version as an opportunity to correct past design mistakes by making breaking changes to compatibility and continuity, changes that would otherwise be forbidden under semantic versioning.

A central feature in TensorFlow 2.0 is the “eager execution” environment, which brings the programming model closer to what users expect. Introduced in TensorFlow 1.7, eager execution is an imperative programming environment that evaluates operations immediately without building graphs: operations return concrete values instead of constructing a computational graph to run later. Eager execution is intended to make the framework easier to learn and use.
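
As a rough illustration (a minimal sketch, not official sample code), here is what eager execution looks like in TensorFlow 2.0, where an operation returns a concrete value as soon as it runs:

    import tensorflow as tf  # assumes TensorFlow 2.0, where eager execution is on by default

    # Operations execute immediately and return concrete values,
    # rather than adding nodes to a graph for a later session.run() call.
    x = tf.constant([[1.0, 2.0],
                     [3.0, 4.0]])
    y = tf.matmul(x, x)   # evaluated right away

    print(y.numpy())      # the result is available as a NumPy array
    # [[ 7. 10.]
    #  [15. 22.]]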

Other capabilities introduced in TensorFlow 2.0 include:

  • The ability to build and train models with the simpler, high-level Keras API (see the sketch after this list).
  • Expanded support for more platforms and languages.
  • Improved compatibility between platform and language components through standardization on exchange formats and alignment of APIs.
  • Removal of deprecated APIs, to reduce confusion among users.
  • The ability to automatically distribute training across all available devices without significant code changes, by way of a new API.
  • Support for TensorFlow Lite, the scaled-down version of TensorFlow that runs on smartphones and other resource-constrained hardware.
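
To give a sense of the Keras workflow mentioned above (a minimal sketch rather than official sample code; the dataset, layer sizes, and training settings are illustrative assumptions), building and training a model in TensorFlow 2.0 looks roughly like this:

    import tensorflow as tf

    # Load a small built-in dataset (MNIST) and scale pixel values to [0, 1].
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train / 255.0

    # Define a simple classifier with the high-level Keras Sequential API.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=1)

Distributing the same training across all available GPUs is then largely a matter of creating and compiling the model inside a distribution strategy scope, for example with tf.distribute.MirroredStrategy().scope():, leaving the rest of the code essentially unchanged.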

A series of public design reviews is planned for the upgrade. 

TensorFlow 1.x code can run almost entirely unmodified in TensorFlow 2.0, but it will not take advantage of TensorFlow 2.0 improvements. The best long-term strategy is to convert existing TensorFlow programs; TensorFlow 2.0 programs not only run faster but are often more concise. A provided conversion tool updates Python code to use APIs compatible with TensorFlow 2.0, and warns when an automatic conversion isn’t possible.
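
As a rough before-and-after sketch (not the output of the conversion tool), here is the same trivial computation written in the 1.x graph-and-session style, which remains reachable in 2.0 through the tf.compat.v1 module, and then in native 2.0 style:

    import tensorflow as tf  # assumes TensorFlow 2.0

    # TensorFlow 1.x style: build a graph, then run it in a session.
    tf1 = tf.compat.v1
    graph = tf1.Graph()
    with graph.as_default():
        a = tf1.placeholder(tf.float32)
        b = tf1.placeholder(tf.float32)
        total = a + b
    with tf1.Session(graph=graph) as sess:
        print(sess.run(total, feed_dict={a: 2.0, b: 3.0}))  # 5.0

    # Native TensorFlow 2.0 style: eager execution, no placeholders or sessions.
    x = tf.constant(2.0)
    y = tf.constant(3.0)
    print((x + y).numpy())  # 5.0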

TensorFlow 2.0’s builders do not anticipate further feature development in the Version 1.x line, though there will be security patches for the Version 1.x line for one year following the release of Version 2.0.

TensorFlow 1.8 new features

New additions in May 2018’s TensorFlow 1.8 include:

  • The ability to prefetch data to GPU memory. This can speed up GPU operations where the data is known ahead of time, since it can then be copied to the GPU all at once (see the sketch after this list).
  • Support for third-generation pipeline config for Cloud TPUs, “which improves performance and usability,” Google says. TPUs are hardware units available exclusively in Google Cloud that accelerate TensorFlow performance.
  • Contributed support for reading and writing protocol buffers from within TensorFlow, as well as support for RPC communication, by way of the tf.contrib.proto and tf.contrib.rpc libraries.
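
As a minimal sketch of GPU prefetching (assuming TensorFlow 1.8 built with CUDA support; the device string and batch size are placeholders), the contributed transformation is applied as the last step of a tf.data pipeline:

    import tensorflow as tf  # assumes TensorFlow 1.8 with a CUDA-enabled GPU

    # Build a simple input pipeline and prefetch batches into GPU memory
    # so the next batch is already on the device when it is needed.
    dataset = tf.data.Dataset.range(1000).batch(100)
    # prefetch_to_device should be the final transformation in the pipeline.
    dataset = dataset.apply(tf.contrib.data.prefetch_to_device("/gpu:0"))

    iterator = dataset.make_one_shot_iterator()
    next_batch = iterator.get_next()

    with tf.Session() as sess:
        print(sess.run(next_batch))  # first batch of 100 elements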

TensorFlow 1.7 new features

Major new features in TensorFlow 1.7 include:

  • The introduction of eager execution, a programming model that evaluates operations immediately rather than building a graph and executing it later. Eager execution is useful for programming projects and environments where you want real-time feedback, for instance in a Python REPL.
  • The contributed module tf.contrib.data.SqlDataset allows a SQLite database to be read as a Dataset.
  • The tf.regex_replace function provides text processing with regular expression syntax. This way, string-type Tensors can be processed directly within TensorFlow, which is faster than doing so in Python or with a third-party string library (see the sketch after this list).
  • The addition of native TensorRT support in TensorFlow via the tf.contrib.tensorrt module. TensorRT is Nvidia’s “deep learning inference optimizer and runtime” that uses Nvidia GPUs to accelerate performance.
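
As a minimal sketch of in-graph string processing with tf.regex_replace (assuming TensorFlow 1.7; the input strings and pattern are made up for illustration):

    import tensorflow as tf  # assumes TensorFlow 1.7

    # Replace digit runs in a batch of string tensors using a regular
    # expression, entirely inside the TensorFlow graph.
    strings = tf.constant(["order 1234", "order 5678"])
    cleaned = tf.regex_replace(strings, pattern=r"\d+", rewrite="<num>")

    with tf.Session() as sess:
        print(sess.run(cleaned))  # [b'order <num>' b'order <num>']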

TensorFlow 1.6 new features

Major new additions in TensorFlow 1.6 include:

  • Changes to prebuilt binaries, which now use CUDA 9.0 and CuDNN 7, as well as the AVX instruction set. The latter change may break TensorFlow on what the TensorFlow team terms “older CPUs.” It’s likely this means anything prior to Intel’s Sandy Bridge and AMD’s Bulldozer processors, both of which shipped in 2011.
  • XLA now supports fast Fourier transform (FFT) functions (see the sketch after this list).
  • CUDA-accelerated TensorFlow can now be built for Android devices using the Tegra chipset.
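
As a minimal sketch of running an FFT under XLA just-in-time compilation (assuming TensorFlow 1.6 on a GPU, where global JIT is supported; the signal is random placeholder data):

    import numpy as np
    import tensorflow as tf  # assumes TensorFlow 1.6

    # Turn on XLA JIT compilation for the session so that supported
    # operations, now including FFTs, can be compiled by XLA.
    config = tf.ConfigProto()
    config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

    signal = tf.constant(np.random.rand(8).astype(np.complex64))
    spectrum = tf.fft(signal)  # fast Fourier transform over a complex tensor

    with tf.Session(config=config) as sess:
        print(sess.run(spectrum))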

TensorFlow 1.5 new features

Major changes in TensorFlow 1.5 include:

  • TensorFlow Lite, a version of TensorFlow optimized for mobile and embedded devices, is available as a development preview. Models created for TensorFlow Lite trade some accuracy for speed and size, but the loss in accuracy can almost always be compensated for.
  • The XLA linear algebra compiler, which optimizes certain TensorFlow computations either ahead of time or just in time, gains several optimizations in this release.
  • The tf.contrib module gains a number of additions, such as tf.contrib.bayesflow.layers, an implementation of a probabilistic neural network (PNN).

TensorFlow 1.4 new features

New additions to TensorFlow 1.4 include:

  • The tf.keras module lets you use the Keras API, a neural network library that predates TensorFlow but is quickly being displaced by it, directly within TensorFlow. The tf.keras API is provided mainly for backward compatibility and to make it easier to port existing Keras models to TensorFlow.
  • The tf.data, or Dataset, API provides a set of abstractions for creating and reusing input pipelines: potentially complex data sets gleaned from one or more sources, with each element transformed as needed. This is useful if you’re creating workflows that require multiple training passes or other complex internal logic (see the sketch at the end of this list).
  • If you have already been using the contributed version of the data API from the previous version of TensorFlow (tf.contrib.data), be warned that the official tf.data API isn’t perfectly backward-compatible. TensorFlow’s documentation has further details on how to migrate away from tf.contrib.data and use the official tf.data library instead.
  • A new train_and_evaluate function provides a simple way to run TensorFlow’s Estimator (used to automatically configure common model parameters) in a distributed fashion across a cluster.
  • TensorFlow’s built-in debugging system now lets you execute arbitrary Python code in the debugger’s command line for quick-and-dirty inspection or modification.
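
As a minimal sketch of the tf.data API (assuming TensorFlow 1.4; the tensors and transformation are made up for illustration), an input pipeline can be built from a source, transformed, shuffled, batched, and repeated for multiple training passes:

    import tensorflow as tf  # assumes TensorFlow 1.4

    # Build a reusable input pipeline: an in-memory source, a per-element
    # transformation, shuffling, batching, and repetition for three passes.
    features = tf.constant([[1.0], [2.0], [3.0], [4.0]])
    labels = tf.constant([0, 1, 0, 1])

    dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
               .map(lambda x, y: (x * 2.0, y))
               .shuffle(buffer_size=4)
               .batch(2)
               .repeat(3))

    iterator = dataset.make_one_shot_iterator()
    next_batch = iterator.get_next()

    with tf.Session() as sess:
        print(sess.run(next_batch))  # one (features, labels) batch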

Copyright © 2019 IDG Communications, Inc.