I am taking deeplearning.ai’s 5-course Deep Learning Specialization on Coursera. Here are my notes from Week 1 (Introduction to Deep Learning) of the first course (Neural Networks and Deep Learning).

1. Neural Nets are Particularly Useful in Supervised Learning Problems

  • Standard neural nets: Real estate, online advertising (selecting the best ad to show a user based on a prediction of whether they will click on it).
  • Convolutional Neural Nets (CNNs): Image recognition.
  • Recurrent Neural Nets (RNNs): Audio processing (speech recognition) and text translation.
  • Hybrid: Autonomous driving.

2. NNs Work Well with Structured and Unstructured Data

  • Structured data example: Data arranged neatly such as in a Pandas DataFrame.
  • Unstructured data examples: Audio, image, text.
  • Unstructured data has traditionally been harder for computers to understand, which is why the NN improvements in this area have been so ground-breaking (a short code sketch of the distinction follows this list).
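
To make the distinction concrete, here is a minimal sketch (the housing columns and the example sentence are made up for illustration, not taken from the lecture):

import numpy as np
import pandas as pd

# Structured data: each column is a well-defined feature (hypothetical housing data)
structured = pd.DataFrame({'size_sqft': [1400, 2100, 875],
                           'bedrooms': [3, 4, 2],
                           'price': [250000, 410000, 160000]})

# Unstructured data: raw pixels or raw text with no predefined feature columns
image = np.random.randint(0, 256, size=(64, 64, 3))   # stand-in for a 64x64 RGB image
text = "The quick brown fox jumps over the lazy dog."  # stand-in for a text document

print(structured)
print(image.shape, len(text))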

3. Scale (in Data and Computational Power) Drives NN Performance

  • At small scale, the most important factor is how the data is manipulated (feature engineering, etc.), and traditional machine learning algorithms perform comparably to large neural nets.
  • At large scale, large neural nets with large amounts of data will perform the best.
  • Increasing the amount of training data usually does not harm the performance of the algorithm.

The code below generates a replica of the graph that Andrew Ng drew to illustrate how neural net performance relates to data size and network size.

In [1]:
# Import libraries
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

#Create x linspace
x = np.linspace(1,50,990)

#Create dummy functions to represent performance
trad_alg = np.log(x)
small_nn = np.log(x*np.linspace(1,3,990))
medium_nn = np.log(x*np.linspace(1,6,990))
large_nn = np.log(x*np.linspace(1,10,990))*np.linspace(1,1.3,990)

#Plot dummy functions
fig = plt.figure(figsize=(10,6))
g = sns.regplot(x=x, y=trad_alg, fit_reg=False, label='Traditional Algorithms')
sns.regplot(x=x, y=small_nn, fit_reg=False, label='Small NN')
sns.regplot(x=x, y=medium_nn, fit_reg=False, label='Medium NN')
sns.regplot(x=x, y=large_nn, fit_reg=False, label='Large NN')


#Set labels and remove numbers for axes
plt.xlabel('Amount of labeled data (m)')
plt.ylabel('Performance')
plt.legend()
g.set(yticks=[])
g.set(xticks=[]);

#Add annotation
g.text(0, -2, "|- - With small amounts of data - -|\n   NN complexity does not \n   greatly correspond to \n   improved performance", fontsize=12)

#Create title
plt.title('Large Neural Nets with Large Training Sets have the Best Performance');

4. Improving Training Speed is Essential Given the Iterative Process of Developing with NNs

  • Neural networks are developed in an iterative fashion. This, combined with the need for large neural nets trained on massive data sets, means that improvements in training speed allow for faster innovation with NNs.
  • Algorithmic improvements have also helped training speed. Developments such as using the ReLU function for activation instead of the sigmoid function have sped up training, which allows for faster iteration when implementing NNs (see the sketch after this list).
  • Depending on the scale of the application, training can take anywhere from 10 minutes to a month.
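
As a rough sketch of why ReLU speeds things up (my own illustration, not code from the lecture): the sigmoid's gradient approaches zero for large |z|, which slows gradient descent, while ReLU keeps a gradient of 1 for all positive inputs.

import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(-5, 5, 1000)

# Sigmoid and its gradient (saturates toward 0 for large |z|)
sigmoid = 1 / (1 + np.exp(-z))
sigmoid_grad = sigmoid * (1 - sigmoid)

# ReLU gradient (constant 1 for z > 0, so it does not vanish)
relu_grad = (z > 0).astype(float)

plt.plot(z, sigmoid_grad, label='sigmoid gradient')
plt.plot(z, relu_grad, label='ReLU gradient')
plt.xlabel('z')
plt.legend()
plt.title('ReLU avoids the vanishing gradient that slows sigmoid training');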