TensorFlow is an open-source machine learning library developed by Google and widely used for applications such as deep learning. The machine learning community relies on it for both research and production. This tutorial will help you understand the basics and build your own TensorFlow models in no time!

The 2019 Kaggle ML & DS Survey showed that TensorFlow ranked second, after Scikit-learn, among the machine learning frameworks most commonly used by machine learning engineers and data scientists.

A visualization from the 2019 Kaggle ML & DS Survey

The tutorial below starts with the basics of TensorFlow 2.0 and works up to creating neural network models. Whether you are used to earlier versions of TensorFlow or completely new to it, you have landed in the right place. Let's get started with the installation process.

Installation of TensorFlow

TensorFlow is tested and supported on the following 64-bit systems:

- Python 3.5–3.7 
- Ubuntu 16.04 or later 
- Windows 7 or later 
- macOS 10.12.6 (Sierra) or later (no GPU support) 
- Raspbian 9.0 or later 

1. Install using pip

TensorFlow can be installed using Python’s pip:

# Requires the latest pip
# TensorFlow 2 packages require a pip version >19.0.
pip install --upgrade pip

# Current stable release for CPU and GPU
pip install tensorflow
# or upgrade an existing installation
pip install --upgrade tensorflow

# Or try the preview build (unstable)
pip install tf-nightly

# Install the alpha0 preview of TensorFlow 2.0
pip install tensorflow==2.0.0-alpha0

2. Run a TensorFlow container

You can also run TensorFlow in a Docker container:

docker pull tensorflow/tensorflow  # Download latest stable image 

docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter  # Start Jupyter server 
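
Whichever route you take, a quick sanity check confirms the install. This is just a minimal sketch, not part of the official install steps:

import tensorflow as tf

# confirm we are on a 2.x release and that eager execution is on by default
print("TensorFlow version:", tf.__version__)
print("Eager execution:", tf.executing_eagerly())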

TensorFlow 2.0 feature highlights

Source: TensorFlow YouTube channel

Some major feature highlights:

- Eager execution is enabled by default, so operations run immediately instead of building a graph first
- tf.keras is the recommended high-level API for building and training models
- tf.function converts plain Python functions into high-performance TensorFlow graphs
- API cleanup: redundant 1.x symbols such as sessions and placeholders are removed or consolidated
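
To make the first and third of these concrete, here is a small sketch (my own example, not from the video) showing eager execution and tf.function side by side:

import tensorflow as tf

# Eager execution: operations run immediately and return concrete values
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.reduce_sum(x))  # tf.Tensor(10.0, shape=(), dtype=float32)

# tf.function traces the Python function into a graph for faster repeated calls
@tf.function
def scaled_sum(t, scale):
    return tf.reduce_sum(t) * scale

print(scaled_sum(x, 2.0))  # tf.Tensor(20.0, shape=(), dtype=float32)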

Basics of TensorFlow 2.0

Variables & Constants:

Variables and constants are easy to create and represent in TensorFlow 2.0. They can be created using the tf.Variable() and tf.constant() objects from TensorFlow's core API, as shown in the coding example below:

import tensorflow as tf
import numpy as np

# check the TensorFlow version before you get started
# print(tf.__version__)

'''
# check these if you want (the Hub line requires: import tensorflow_hub as hub)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.experimental.list_physical_devices("GPU") else "NOT AVAILABLE")
'''

'''
We can define a variable within a named scope; here "my" is the scope name
(note how it appears in the output below)
'''

with tf.name_scope("my"):
    variable = tf.Variable(8848)  # variable defined

string = tf.constant("Hello world! TensorFlow 2.0")  # constant defined
print("tensor:", variable)         # directly print what the tensor object looks like
print("value:", variable.numpy())  # or convert the tensor object to its NumPy value
print(string.numpy())


Output:

tensor: <tf.Variable 'my/Variable:0' shape=() dtype=int32, numpy=8848>
value: 8848
b'Hello world! TensorFlow 2.0'
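
Unlike constants, variables can be updated in place. A quick sketch, reusing the variable defined above:

variable.assign(8849)    # overwrite the value
variable.assign_add(1)   # increment in place
print(variable.numpy())  # 8850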

Operations on the Tensor data structure:

In this example, we'll further explore variables and constants to create 2D arrays (matrices) and perform operations on them.

# let us do some operations on tensor objects

a = tf.constant(10)
b = tf.constant(11)

print("a + b :", a.numpy() + b.numpy())
print("Addition with constants: ", a + b)
print("Addition with constants: ", tf.add(a, b))
print("a * b :", a.numpy() * b.numpy())
print("Multiplication with constants: ", a * b)
print("Multiplication with constants: ", tf.multiply(a, b))

# create a pre-filled 2D array (matrix)
a = tf.ones([2, 3])
print(a)
a = tf.Variable(a)   # wrap in a Variable so elements can be assigned
a[0, 0].assign(0)    # assign the value 0
print(a.numpy())
b = tf.zeros([3, 3])
print(b)
b = tf.Variable(b)
b[1, 1].assign(1)    # assign the value 1
print(b)

matrix1 = tf.constant([[3., 3.]])

# Create another constant that produces a 2x1 matrix.
matrix2 = tf.constant([[2.], [2.]])

product = tf.matmul(matrix1, matrix2)
print("Multiplication with matrices:", product)

print("Broadcast matrix in multiplication:", matrix1 * matrix2)

Output:

a + b : 21
Addition with constants:  tf.Tensor(21, shape=(), dtype=int32)
Addition with constants:  tf.Tensor(21, shape=(), dtype=int32)
a * b : 110
Multiplication with constants:  tf.Tensor(110, shape=(), dtype=int32)
Multiplication with constants:  tf.Tensor(110, shape=(), dtype=int32)
tf.Tensor(
[[1. 1. 1.]
 [1. 1. 1.]], shape=(2, 3), dtype=float32)
[[0. 1. 1.]
 [1. 1. 1.]]
tf.Tensor(
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]], shape=(3, 3), dtype=float32)
<tf.Variable 'Variable:0' shape=(3, 3) dtype=float32, numpy=
array([[0., 0., 0.],
       [0., 1., 0.],
       [0., 0., 0.]], dtype=float32)>
Multiplication with matrices: tf.Tensor([[12.]], shape=(1, 1), dtype=float32)
Broadcast matrix in multiplication: tf.Tensor(
[[6. 6.]
 [6. 6.]], shape=(2, 2), dtype=float32)
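
Note the difference between the last two results: tf.matmul performed true matrix multiplication of a (1, 2) by a (2, 1) matrix, giving a (1, 1) result, while the element-wise * broadcast both operands to (2, 2). A small sketch making the broadcast explicit:

row = tf.constant([[3., 3.]])    # shape (1, 2)
col = tf.constant([[2.], [2.]])  # shape (2, 1)
# size-1 dimensions are stretched so the shapes align: both become (2, 2)
print((row * col).shape)         # (2, 2); every entry is 3 * 2 = 6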

Build an Image Classification model using TensorFlow 2.0

Overview of steps to follow:

  1. Install & import necessary libraries/modules (tensorflow, numpy, matplotlib)
  2. Get a preprocessed dataset (simple fashion-image data) from the Keras datasets
  3. Decode the preprocessed data into human-understandable form to see what's in it
  4. Develop a neural network model (Sequential) and train it on the dataset
  5. Predict on test data and evaluate the model
  6. Store the model and retrieve it

# importing necessary libraries and datasets

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
layers = tf.keras.layers
data = tf.keras.datasets.fashion_mnist  # preprocessed dataset of fashion images

# Splitting the dataset into train and test sets
'''
Note: the dataset contains 60,000 28x28 grayscale images of 10 fashion
categories, along with a test set of 10,000 images; see the docs for details
'''
(x_train, y_train), (x_test, y_test) = data.load_data()

# Scale the pixel values down to the range [0, 1]
x_train, x_test = x_train / 255.0, x_test / 255.0
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

print("See what our dataset looks like... go ahead and try it for one image below")
'''
# uncomment these to see what the data looks like
# it shows the image as a matrix of pixel values
print("Image in matrix:\n")
print(x_train[1])
print("Image class:\n")
print(y_train[1])
'''
print("\nBut what it actually represents ...")

# decoding images from the matrices of data
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(x_train[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[y_train[i]])
plt.show()

Images in the dataset:

# Using Keras to create the neural network model
model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),   # 28x28 image -> flat vector of 784
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax")  # one probability per class
])

model.compile(optimizer='adam', loss="sparse_categorical_crossentropy", metrics=["accuracy"])
print(model.summary())
model.fit(x_train, y_train, epochs=20)


prediction = model.predict(x_test)

# visualizing predictions
for i in range(2):
    plt.grid(False)
    plt.imshow(x_test[i], cmap=plt.cm.binary)
    plt.xlabel("Actual: " + class_names[y_test[i]])
    plt.title("Prediction: " + class_names[np.argmax(prediction[i])])
    plt.show()

Output:

Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten_6 (Flatten)          (None, 784)               0         
_________________________________________________________________
dense_12 (Dense)             (None, 128)               100480    
_________________________________________________________________
dense_13 (Dense)             (None, 10)                1290      
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/20
60000/60000 [==============================] - 27s 453us/sample - loss: 0.5025 - accuracy: 0.8229
Epoch 2/20
60000/60000 [==============================] - 30s 496us/sample - loss: 0.3727 - accuracy: 0.8651
Epoch 3/20
60000/60000 [==============================] - 41s 687us/sample - loss: 0.3354 - accuracy: 0.8765
..........
Epoch 20/20
60000/60000 [==============================] - 36s 601us/sample - loss: 0.1792 - accuracy: 0.9324
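
Step 5 of the overview also calls for evaluating the model on the test set, which the code above doesn't do explicitly. A minimal sketch using Keras' built-in evaluate, reusing the variables defined earlier:

# evaluate accuracy on the held-out test data
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print("Test accuracy:", test_acc)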


# saving the model for future use

# save the model weights and architecture to a single HDF5 file
model.save("model.h5")
# load the model back
from tensorflow.keras.models import load_model
model = load_model('model.h5')
model.summary()
Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten_6 (Flatten)          (None, 784)               0         
_________________________________________________________________
dense_12 (Dense)             (None, 128)               100480    
_________________________________________________________________
dense_13 (Dense)             (None, 10)                1290      
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
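
Besides the single HDF5 file used above, TensorFlow 2.0 can also save to its own SavedModel directory format. A hedged sketch (the directory name here is arbitrary):

# SavedModel format: a directory instead of a single .h5 file
model.save("fashion_saved_model", save_format="tf")
restored = tf.keras.models.load_model("fashion_saved_model")
restored.summary()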

Build a text classifier using TensorFlow 2.0

Overview of steps to follow:

  1. Install and import necessary libraries/modules (tensorflow, numpy, matplotlib)
  2. Get a preprocessed dataset (IMDB movie reviews) from the Keras datasets
  3. Decode the preprocessed data into human-understandable form to see what's in it
  4. Develop a neural network model (Sequential) and train it on the dataset
  5. Predict on test data and evaluate the model
  6. Store the model and retrieve it

# importing necessary libraries and datasets

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
layers = tf.keras.layers
prep = tf.keras.preprocessing  # contains modules for preprocessing datasets
data = tf.keras.datasets.imdb  # preprocessed dataset of IMDB reviews (positive & negative)

(x_train, y_train), (x_test, y_test) = data.load_data(num_words=10000)

# if this raises an error, check your NumPy version; downgrading to 1.16.x is a known fix

# let's look at the dataset
print(x_train[0])  # it's integer-encoded, i.e. every number points to a certain word

# We need a mapping from numeric values to human-readable form

word_index = data.get_word_index()  # gives the required mapping (key is the word, value is its integer representation)
word_index = {k: (v + 3) for k, v in word_index.items()}  # note v+3 to make room for the special keys below

# Some special keys in the text are
word_index["<PAD>"] = 0     # used for padding
word_index["<START>"] = 1   # identifies the start of a review
word_index["<UNK>"] = 2     # unknown word
word_index["<UNUSED>"] = 3

reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])  # dictionary that reverses word_index

def decode(text):  # decode integers back to words
    return " ".join([reverse_word_index.get(i, "?") for i in text])

print(decode(x_test[0]))

Output:

[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
<START> please give this one a miss br br <UNK> <UNK> and the rest of the cast rendered terrible performances the show is flat flat flat br br i don't know how michael madison could have allowed this one on his plate he almost seemed to know this wasn't going to work out and his performance was quite <UNK> so all you madison fans give this a miss
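The decode helper turns integers back into words; going the other way is handy when you later want to test the model on your own text. A sketch using the word_index built above (this encode helper is my own addition, not part of the original code):

def encode(text):
    words = text.lower().split()
    # unknown words map to <UNK> (2); 1 marks the start of a review
    return [1] + [word_index.get(w, 2) for w in words]

print(encode("please give this one a miss"))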
# preprocessing: pad every review to the same length of 250

x_train = prep.sequence.pad_sequences(x_train, value=word_index["<PAD>"], padding="post", maxlen=250)
x_test = prep.sequence.pad_sequences(x_test, value=word_index["<PAD>"], padding="post", maxlen=250)

print(x_test[0])
# Creating the neural network Keras model (Sequential)

model = keras.Sequential()
model.add(layers.Embedding(10000, 16))
model.add(layers.GlobalAveragePooling1D())
model.add(layers.Dense(16, activation="relu"))
model.add(layers.Dense(16, activation="sigmoid"))
# Note: a more conventional binary-classification setup would be a single
# Dense(1, activation="sigmoid") output trained with binary_crossentropy;
# the 16-unit output and hinge loss below are kept to match the output shown.

model.summary()
model.compile(optimizer='adam', loss="hinge", metrics=['accuracy'])


fitted_model = model.fit(x_train, y_train, epochs=20, batch_size=5, validation_data=(x_test, y_test), verbose=1)

results = model.evaluate(x_test, y_test)
print(results)

# let us test on one review
test1 = x_test[0]
predict = model.predict(np.array([test1]))  # wrap in an array to form a batch of one
print("Review:\n")
print(decode(test1))
print("Actual:\n")
print(str(y_test[0]))
print("Model Predicted:\n")
print(str(predict[0][0]))

Output:

[   1  591  202   14   31    6  717   10   10    2    2    5    4  360
    7    4  177 5760  394  354    4  123    9 1035 1035 1035   10   10
   13   92  124   89  488 7944  100   28 1668   14   31   23   27 7479
   29  220  468    8  124   14  286  170    8  157   46    5   27  239
   16  179    2   38   32   25 7944  451  202   14    6  717    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0]
Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_6 (Embedding)      (None, None, 16)          160000    
_________________________________________________________________
global_average_pooling1d_6 ( (None, 16)                0         
_________________________________________________________________
dense_12 (Dense)             (None, 16)                272       
_________________________________________________________________
dense_13 (Dense)             (None, 16)                272       
=================================================================
Total params: 160,544
Trainable params: 160,544
Non-trainable params: 0
_________________________________________________________________
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
25000/25000 [==============================] - 103s 4ms/sample - loss: 0.7186 - accuracy: 0.0744 - val_loss: 0.6372 - val_accuracy: 0.0208
..........
Epoch 20/20
25000/25000 [==============================] - 80s 3ms/sample - loss: 0.5343 - accuracy: 0.8878 - val_loss: 0.6342 - val_accuracy: 0.8211
25000/25000 [==============================] - 4s 153us/sample - loss: 0.6342 - accuracy: 0.8208
[0.6341762409973144, 0.8208]
Review:
<START> please give this one a miss br br <UNK> <UNK> and the rest of the cast rendered terrible performances the show is flat flat flat br br i don't know how michael madison could have allowed this one on his plate he almost seemed to know this wasn't going to work out and his performance was quite <UNK> so all you madison fans give this a miss <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
Actual:

0
Model Predicted:

0.0
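
As noted in the model-definition comment above, a 0.0 from a 16-unit sigmoid output is hard to read as a class probability. For reference, here is a sketch of the more conventional binary setup (an alternative, not the model used for the outputs above), reusing the keras and layers imports:

alt_model = keras.Sequential([
    layers.Embedding(10000, 16),
    layers.GlobalAveragePooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid")  # single probability of a positive review
])
alt_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# alt_model.fit(x_train, y_train, epochs=5, batch_size=512, validation_data=(x_test, y_test))
# a prediction > 0.5 would then be read as a positive review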
# Store the model for future use

# save the model weights and architecture to a single HDF5 file
model.save("model.h5")
# load the model back
from tensorflow.keras.models import load_model
model = load_model('model.h5')
model.summary()
Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_6 (Embedding)      (None, None, 16)          160000    
_________________________________________________________________
global_average_pooling1d_6 ( (None, 16)                0         
_________________________________________________________________
dense_12 (Dense)             (None, 16)                272       
_________________________________________________________________
dense_13 (Dense)             (None, 16)                272       
=================================================================
Total params: 160,544
Trainable params: 160,544
Non-trainable params: 0
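
The reloaded model is immediately usable for inference; a quick sketch reusing x_test and np from above:

# the restored model predicts exactly like the one that was saved
reloaded_prediction = model.predict(np.array([x_test[0]]))
print(reloaded_prediction[0][0])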

Useful References

  • Tutorials from TensorFlow – well-written Jupyter notebook tutorials from beginner to expert level, along with some nice video explanations.
  • A tutorial from freeCodeCamp – briefly explained neural network concepts, along with a TensorFlow 2.0 tutorial on image and text classification.
