
EXPERT INSIGHT

Deep Learning with TensorFlow 2 and Keras, Second Edition

Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API

Antonio Gulli, Amita Kapoor, Sujit Pal

FOR SALE IN INDIA ONLY

Deep Learning with TensorFlow 2 and Keras Second Edition

Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API

Antonio Gulli, Amita Kapoor, Sujit Pal

BIRMINGHAM - MUMBAI

Deep Learning with TensorFlow 2 and Keras, Second Edition

Copyright © 2019 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Commissioning Editor: Amey Varangaonkar
Acquisition Editors: Yogesh Deokar, Ben Renow-Clarke
Acquisition Editor – Peer Reviews: Suresh Jain
Content Development Editor: Ian Hough
Technical Editor: Gaurav Gavas
Project Editor: Janice Gonsalves
Proofreader: Safis Editing
Indexer: Rekha Nair
Presentation Designer: Sandip Tadge

First published: April 2017
Second edition: December 2019

Production reference: 2130320

Published by Packt Publishing Ltd.
Livery Place, 35 Livery Street
Birmingham B3 2PB, UK.

ISBN 978-1-83882-341-2

www.packt.com

packt.com

Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

• Spend less time learning and more time coding with practical eBooks and videos from over 4,000 industry professionals
• Learn better with Skill Plans built especially for you
• Get a free eBook or video every month
• Fully searchable for easy access to vital information
• Copy and paste, print, and bookmark content

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.Packt.com, and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details. At www.Packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Contributors

About the authors

Antonio Gulli has a passion for establishing and managing global technological talent, and for innovation and execution. His core expertise is in cloud computing, deep learning, and search engines. Currently, he serves as Engineering Director for the Office of the CTO, Google Cloud. Previously, he served as Google Warsaw Site Leader, doubling the size of the engineering site. So far, Antonio has been lucky enough to gain professional experience in 4 countries in Europe and has managed teams in 6 countries in EMEA and the US: in Amsterdam, as Vice President for Elsevier, a leading scientific publisher; in London, as Engineering Site Lead for Microsoft, working on Bing Search; as CTO for Ask.com; and in several co-founded start-ups, including one of the first web search companies in Europe. Antonio has co-invented a number of technologies for search, smart energy, the environment, and AI, with 20+ patents issued or applied for, and he has published several books about coding and machine learning, also translated into Japanese and Chinese. Antonio speaks Spanish, English, and Italian, and he is currently learning Polish and French. Antonio is a proud father of 2 boys, Lorenzo, 18, and Leonardo, 13, and a little queen, Aurora, 9.

I want to thank my kids, Aurora, Leonardo, and Lorenzo, for motivating and supporting me during all the moments of my life. Special thanks to my parents, Elio and Maria, for being there when I need it. I'm particularly grateful to the important people in my life: Eric, Francesco, Antonello, Antonella, Ettore, Emanuela, Laura, Magda, and Nina. I want to thank all my colleagues at Google for their encouragement in writing this and previous books, for the precious time we've spent together, and for their advice: Behshad, Wieland, Andrei, Brad, Eyal, Becky, Rachel, Emanuel, Chris, Eva, Fabio, Jerzy, David, Dawid, Piotr, Alan, and many others. I'm especially appreciative of all my colleagues at OCTO, the Office of the CTO at Google, and I'm humbled to be part of a formidable and very talented team. Thanks, Jonathan and Will. Thanks to my high school friends and professors who inspired me over many years (D'Africa and Ferragina in particular). Thanks to the reviewers for their thoughtful comments and efforts toward improving this book, and to my co-authors for their passion and energy. This book has been written in six different cities: Warsaw (Charlotte Bar); Amsterdam (Cafe de Jaren); Pisa (La Petite and Caffe i Miracoli); Lucca (Tosco, Piazza Anfiteatro); London (Said and Nespresso); and Paris (Laduree). Lots of travel and lots of good coffee in a united Europe!

Amita Kapoor is an associate professor in the Department of Electronics, SRCASW, University of Delhi, and has been actively teaching neural networks and artificial intelligence for the last 20 years. Coding and teaching are her two passions, and she enjoys solving challenging problems. She is a recipient of the DAAD Sandwich fellowship 2008, and the Best Presentation Award at an international conference, Photonics 2008. She is an avid reader and learner. She has co-authored books on deep learning and has more than 50 publications in international journals and conferences. Her present research areas include machine learning, deep reinforcement learning, quantum computers, and robotics.

To my grandmother, the late Kailashwati Maini, for her unconditional love and affection; my grandmother, the late Kesar Kapoor, for her marvelous stories that fueled my imagination; my mother, the late Swarnlata Kapoor, for having trust in my abilities and dreaming for me; and my stepmother, the late Anjali Kapoor, for teaching me that every struggle can be a stepping stone.

I am grateful to my teachers throughout life, who inspired me, encouraged me, and most importantly taught me: Prof. Parogmna Sen, Prof. Wolfgang Freude, Prof. Enakshi Khullar Sharma, Dr. S Lakshmi Devi, Dr. Rashmi Saxena, and Dr. Rekha Gupta. I am extremely thankful to the entire Packt team for the work and effort they have put in since the inception of this book, and to the reviewers who painstakingly went through the content and verified the code; their comments and suggestions helped improve the book. I am particularly thankful to my co-authors Antonio Gulli and Sujit Pal for sharing their vast experience with me in the writing of this book. I would like to thank my college administration, governing body, and Principal Dr. Payal Mago for sanctioning my sabbatical leave so that I could concentrate on the book. I would also like to thank my colleagues for the support and encouragement they have provided, with a special mention of Dr. Punita Saxena, Dr. Jasjeet Kaur, Dr. Ratnesh Saxena, Dr. Daya Bhardwaj, Dr. Sneha Kabra, Dr. Sadhna Jain, Mr. Projes Roy, Ms. Venika Gupta, and Ms. Preeti Singhal. I want to thank my family members and friends: my extended family Krishna Maini, Suraksha Maini, the late HCD Maini, Rita Maini, Nirjara Jain, Geetika Jain, and Rashmi Singh, and my father, Anil Mohan Kapoor. And last but not least, I would like to thank Narotam Singh for his invaluable discussions, inspiration, and unconditional support through all phases of my life. A part of the royalties of this book will go to smilefoundation.org.

Sujit Pal is a Technology Research Director at Elsevier Labs, an advanced technology group within the Reed-Elsevier Group of companies. His areas of interest include semantic search, natural language processing, machine learning, and deep learning. At Elsevier, he has worked on several machine learning initiatives involving large image and text corpora, and on other initiatives around recommendation systems and knowledge graph development. He has previously co-authored another book on deep learning with Antonio Gulli, and he writes about technology on his blog, Salmon Run.

I would like to thank both my co-authors for their support and for making this authoring experience a productive and pleasant one; the editorial team at Packt, who were constantly there for us with constructive help and support; and my family for their patience. It has truly taken a village, and this book would not have been possible without the passion and hard work of everyone on the team.

About the reviewers

Haesun Park is a machine learning Google Developer Expert. He has been a software engineer for more than 15 years, and he has written and translated several books on machine learning. He is an entrepreneur, and currently runs his own business. Other books Haesun has worked on include the translations of Hands-On Machine Learning with Scikit-Learn and TensorFlow, Python Machine Learning, and Deep Learning with Python.

I would like to thank Suresh Jain, who proposed this work to me, and extend my sincere gratitude to Janice Gonsalves, who provided me with a great deal of support in the undertaking of reviewing this book.

Dr. Simeon Bamford has a background in AI. He specializes in neural and neuromorphic engineering, including neural prosthetics, mixed-signal CMOS design for spike-based learning, and machine vision with event-based sensors. He has used TensorFlow for natural language processing, and has experience in deploying TensorFlow models on serverless cloud platforms.

Table of Contents

Preface  xi

Chapter 1: Neural Network Foundations with TensorFlow 2.0  1
What is TensorFlow (TF)?  1
What is Keras?  3
What are the most important changes in TensorFlow 2.0?  3
Introduction to neural networks  5
Perceptron  6
A first example of TensorFlow 2.0 code  7
Multi-layer perceptron – our first example of a network  8
Problems in training the perceptron and their solutions  9
Activation function – sigmoid  10
Activation function – tanh  10
Activation function – ReLU  11
Two additional activation functions – ELU and LeakyReLU  12
Activation functions  13
In short – what are neural networks after all?  13
A real example – recognizing handwritten digits  14
One-hot encoding (OHE)  14
Defining a simple neural network in TensorFlow 2.0  15
Running a simple TensorFlow 2.0 net and establishing a baseline  20
Improving the simple net in TensorFlow 2.0 with hidden layers  21
Further improving the simple net in TensorFlow with Dropout  24
Testing different optimizers in TensorFlow 2.0  26
Increasing the number of epochs  32
Controlling the optimizer learning rate  33
Increasing the number of internal hidden neurons  34

Increasing the size of batch computation  35
Summarizing experiments run for recognizing handwritten digits  36
Regularization  36
Adopting regularization to avoid overfitting  36
Understanding BatchNormalization  38
Playing with Google Colab – CPUs, GPUs, and TPUs  39
Sentiment analysis  42
Hyperparameter tuning and AutoML  45
Predicting output  45
A practical overview of backpropagation  46
What have we learned so far?  48
Towards a deep learning approach  48
References  49

Chapter 2: TensorFlow 1.x and 2.x  51
Understanding TensorFlow 1.x  51
TensorFlow 1.x computational graph program structure  51
Computational graphs  52
Working with constants, variables, and placeholders  54
Examples of operations  55
Constants  55
Sequences  56
Random tensors  56
Variables  57

An example of TensorFlow 1.x in TensorFlow 2.x  59
Understanding TensorFlow 2.x  60
Eager execution  60
AutoGraph  61
Keras APIs – three programming models  63
Sequential API  63
Functional API  64
Model subclassing  66
Callbacks  67
Saving a model and weights  68
Training from tf.data.datasets  69
tf.keras or Estimators?  72
Ragged tensors  74
Custom training  74
Distributed training in TensorFlow 2.x  76
Multiple GPUs  76
MultiWorkerMirroredStrategy  78
TPUStrategy  78
ParameterServerStrategy  78
Changes in namespaces  79

Converting from 1.x to 2.x  80
Using TensorFlow 2.x effectively  80
The TensorFlow 2.x ecosystem  81
Language bindings  82
Keras or tf.keras?  83
Summary  84

Chapter 3: Regression  87
What is regression?  87
Prediction using linear regression  88
Simple linear regression  89
Multiple linear regression  93
Multivariate linear regression  93
TensorFlow Estimators  94
Feature columns  94
Input functions  95
MNIST using TensorFlow Estimator API  95
Predicting house price using linear regression  97
Classification tasks and decision boundaries  101
Logistic regression  102
Logistic regression on the MNIST dataset  103
Summary  107
References  108

Chapter 4: Convolutional Neural Networks  109
Deep Convolutional Neural Network (DCNN)  110
Local receptive fields  110
Shared weights and bias  111
A mathematical example  111
ConvNets in TensorFlow 2.x  112
Pooling layers  113
Max pooling  113
Average pooling  113
ConvNets summary  113
An example of DCNN ‒ LeNet  114
LeNet code in TensorFlow 2.0  114
Understanding the power of deep learning  121
Recognizing CIFAR-10 images with deep learning  122
Improving the CIFAR-10 performance with a deeper network  125
Improving the CIFAR-10 performance with data augmentation  128
Predicting with CIFAR-10  130
Very deep convolutional networks for large-scale image recognition  132
Recognizing cats with a VGG16 Net  134

Utilizing tf.keras built-in VGG16 Net module  135
Recycling prebuilt deep learning models for extracting features  136
Summary  137
References  138

Chapter 5: Advanced Convolutional Neural Networks  139
Computer vision  139
Composing CNNs for complex tasks  139
Classification and localization  140
Semantic segmentation  141
Object detection  142
Instance segmentation  145
Classifying Fashion-MNIST with a tf.keras estimator model  147
Running the Fashion-MNIST tf.keras estimator model on GPUs  150
Deep Inception-v3 Net used for transfer learning  151
Transfer learning for classifying horses and humans  154
Application Zoos with tf.keras and TensorFlow Hub  157
Keras applications  158
TensorFlow Hub  158

Other CNN architectures  159
AlexNet  159
Residual networks  159
HighwayNets and DenseNets  160
Xception  160
Answering questions about images (VQA)  162
Style transfer  165
Content distance  166
Style distance  167
Creating a DeepDream network  168
Inspecting what a network has learned  172
Video  173
Classifying videos with pretrained nets in six different ways  173
Textual documents  174
Using a CNN for sentiment analysis  175
Audio and music  178
Dilated ConvNets, WaveNet, and NSynth  178
A summary of convolution operations  183
Basic convolutional neural networks (CNN or ConvNet)  183
Dilated convolution  184
Transposed convolution  184
Separable convolution  184
Depthwise convolution  185
Depthwise separable convolution  185
Capsule networks  185

So what is the problem with CNNs?  185
So what is new with Capsule networks?  186
Summary  188
References  188

Chapter 6: Generative Adversarial Networks  191
What is a GAN?  191
MNIST using GAN in TensorFlow  193
Deep convolutional GAN (DCGAN)  198
DCGAN for MNIST digits  200
Some interesting GAN architectures  209
SRGAN  209
CycleGAN  210
InfoGAN  212
Cool applications of GANs  214
CycleGAN in TensorFlow 2.0  218
Summary  228
References  228

Chapter 7: Word Embeddings  231
Word embedding ‒ origins and fundamentals  231
Distributed representations  233
Static embeddings  234
Word2Vec  235
GloVe  238
Creating your own embedding using gensim  239
Exploring the embedding space with gensim  240
Using word embeddings for spam detection  243
Getting the data  244
Making the data ready for use  245
Building the embedding matrix  247
Defining the spam classifier  248
Training and evaluating the model  250
Running the spam detector  251
Neural embeddings – not just for words  252
Item2Vec  253
node2vec  253
Character and subword embeddings  259
Dynamic embeddings  260
Sentence and paragraph embeddings  262
Language model-based embeddings  264
Using BERT as a feature extractor  267
Fine-tuning BERT  269
Classifying with BERT ‒ command line  270
Using BERT as part of your own network  271
Summary  275
References  275

Chapter 8: Recurrent Neural Networks  279
The basic RNN cell  280
Backpropagation through time (BPTT)  283
Vanishing and exploding gradients  284
RNN cell variants  285
Long short-term memory (LSTM)  285
Gated recurrent unit (GRU)  288
Peephole LSTM  288
RNN variants  289
Bidirectional RNNs  289
Stateful RNNs  290
RNN topologies  291
Example ‒ One-to-Many – learning to generate text  292
Example ‒ Many-to-One – sentiment analysis  300
Example ‒ Many-to-Many – POS tagging  307
Encoder-Decoder architecture – seq2seq  316
Example ‒ seq2seq without attention for machine translation  318
Attention mechanism  328
Example ‒ seq2seq with attention for machine translation  330
Transformer architecture  336
Summary  340
References  340

Chapter 9: Autoencoders  345
Introduction to autoencoders  345
Vanilla autoencoders  347
TensorFlow Keras layers ‒ defining custom layers  348
Reconstructing handwritten digits using an autoencoder  350
Sparse autoencoder  354
Denoising autoencoders  356
Clearing images using a denoising autoencoder  357
Stacked autoencoder  360
Convolutional autoencoder for removing noise from images  360
Keras autoencoder example ‒ sentence vectors  365
Summary  373
References  374

Chapter 10: Unsupervised Learning  375
Principal component analysis  375
PCA on the MNIST dataset  376
TensorFlow Embedding API  379
K-means clustering  380
K-means in TensorFlow 2.0  381
Variations in k-means  384
Self-organizing maps  384
Colour mapping using SOM  387
Restricted Boltzmann machines  392
Reconstructing images using RBM  393
Deep belief networks  397
Variational Autoencoders  399
Summary  404
References  405

Chapter 11: Reinforcement Learning  407
Introduction  407
RL lingo  409
Deep reinforcement learning algorithms  411
Reinforcement success in recent years  414
Introduction to OpenAI Gym  415
Random agent playing Breakout  418
Deep Q-Networks  420
DQN for CartPole  422
DQN to play a game of Atari  427
DQN variants  430
Double DQN  430
Dueling DQN  431
Rainbow  434
Deep deterministic policy gradient  434
Summary  436
References  436

Chapter 12: TensorFlow and Cloud  439
Deep learning on cloud  439
Microsoft Azure  440
Amazon Web Services (AWS)  442
Google Cloud Platform (GCP)  444
IBM Cloud  447
Virtual machines on cloud  447
EC2 on Amazon  448
Compute Instance on GCP  450

Deep Learning with TensorFlow 2 and Keras, Second Edition

Deep Learning with TensorFlow 2 and Keras, Second Edition teaches neural networks and deep learning techniques alongside TensorFlow (TF) and Keras. You'll learn how to write deep learning applications in the most powerful, popular, and scalable machine learning stack available.

What you will learn:

• Build machine learning and deep learning systems with TensorFlow 2 and the Keras API
• Use regression analysis, the most popular approach to machine learning
• Understand ConvNets (convolutional neural networks) and how they are essential for deep learning systems such as image classifiers
• Use GANs (generative adversarial networks) to create new data that fits with existing patterns
• Discover RNNs (recurrent neural networks) that can process sequences of input intelligently, using one part of a sequence to correctly interpret another
• Apply deep learning to natural human language and interpret natural language texts to produce an appropriate response
• Train your models on the cloud and put TF to work in real environments
• Explore how Google tools can automate simple ML workflows without the need for complex modeling


TensorFlow is the machine learning library of choice for professional applications, while Keras offers a simple and powerful Python API for accessing TensorFlow. TensorFlow 2 provides full Keras integration, making advanced machine learning easier and more convenient than ever before.
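To give a flavor of that integration — a minimal sketch of our own, not an excerpt from the book's code — the following tf.keras snippet defines, trains, and evaluates a small MNIST digit classifier; the layer sizes, dropout rate, and epoch count are illustrative choices only:

# A minimal tf.keras sketch (illustrative values; not taken from the book's examples).
import tensorflow as tf

# Load the MNIST digits and scale pixel values to the [0, 1] range.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Declare the network layer by layer with the Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer (size is arbitrary)
    tf.keras.layers.Dropout(0.2),                     # regularization
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
])

# Compile with an optimizer, a loss, and a metric, then train and evaluate.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

Everything here runs eagerly under TensorFlow 2 defaults, with no sessions or graph plumbing — the kind of convenience the Keras integration provides.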

This book also introduces neural networks with TensorFlow, runs through the main applications (regression, ConvNets (CNNs), GANs, RNNs, NLP), covers two working example apps, and then dives into TF in production, TF mobile, and using TensorFlow with AutoML.

www.packtpub.com

FOR SALE IN INDIA ONLY