**THE NEURAL NETWORK:**

1.building intelligent machines

2.the limits of traditional computer programs

3.the mechanics of machine learning

4.the neuron

5.expressing linear perceptrons as neurons

6.feed-forward neural networks

7.linear neurons and their limitations

8.sigmoid, tanh, and ReLU neurons

9.softmax output layers

10.looking forward

**TRAINING FEED-FORWARD NEURAL NETWORKS:**

1.the fast-food problem

2.gradient descent

3.the delta rule and learning rates

4.gradient descent with sigmoidal neurons

5.the backpropagation algorithm

6.stochastic and minibatch gradient descent

7.test sets, validation sets, and overfitting

8.preventing overfitting in deep neural networks

9.summary

**IMPLEMENTING NEURAL NETWORKS IN TENSORFLOW:**

1.what is tensorflow?

2.how does tensorflow compare to alternatives?

3.installing tensorflow

4.creating and manipulating tensorflow variables

5.tensorflow operations

6.placeholder tensors

7.sessions in tensorflow

8.navigating variable scopes and sharing variables

9.managing models over the CPU and GPU

10.specifying the logistic regression model in tensorflow

11.logging and training the logistic regression model

12.leveraging TensorBoard to visualize computation graphs and learning

13.building a multilayer model for MNIST in tensorflow

14.summary

**BEYOND GRADIENT DESCENT:**

1.the challenges with gradient descent

2.local minima in the error surfaces of deep networks

3.model identifiability

4.how pesky are spurious local minima in deep networks?

5.flat regions in the error surface

6.when the gradient points in the wrong direction

7.momentum-based optimization

8.a brief view of second-order methods

9.learning rate adaptation

10.AdaGrad: accumulating historical gradients

11.RMSProp: exponentially weighted moving average of gradients

12.Adam: combining momentum and RMSProp

13.the philosophy behind optimizer selection

14.summary

**CONVOLUTIONAL NEURAL NETWORKS:**

1.neurons in human vision

2.the shortcomings of feature selection

3.vanilla deep neural networks don’t scale

4.filters and feature maps

5.full description of the convolutional layer

6.max pooling

7.full architectural description of convolutional networks

8.closing the loop on MNIST with convolutional networks

9.image preprocessing pipelines enable more robust models

10.accelerating training with batch normalization

11.building a convolutional network for CIFAR-10

12.visualizing learning in convolutional networks

13.leveraging convolutional filters to replicate artistic styles

14.learning convolutional filters for other problem domains

15.summary

**EMBEDDING AND REPRESENTATION LEARNING:**

1.learning lower-dimensional representations

2.principal component analysis

3.motivating the autoencoder architecture

4.implementing an autoencoder in tensorflow

5.denoising to force robust representations

6.sparsity in autoencoders

7.when context is more informative than the input vector

8.the word2vec framework

9.implementing the skip-gram architecture

10.summary

**MODELS FOR SEQUENCE ANALYSIS:**

1.analyzing variable-length inputs

2.tackling seq2seq with neural N-grams

3.implementing a part-of-speech tagger

4.dependency parsing and syntaxnet

5.beam search and global normalization

6.a case for stateful deep learning models

7.recurrent neural networks

8.the challenges with vanishing gradients

9.long short-term memory units

10.tensorflow primitives for RNN models

11.solving seq2seq tasks with recurrent neural networks

12.augmenting recurrent networks with attention

13.dissecting a neural translation network

14.summary

**MEMORY AUGMENTED NEURAL NETWORKS:**

1.neural turing machines

2.attention-based memory access

3.NTM memory addressing mechanism

4.differentiable neural computers

5.interference-free writing in DNCs

6.DNC memory reuse

7.temporal linking of DNC writes

8.understanding the DNC read head

9.the DNC controller network

10.visualizing the DNC in action

11.implementing the DNC in tensorflow

12.teaching a DNC to read and comprehend

13.summary

**DEEP REINFORCEMENT LEARNING:**

1.deep reinforcement learning masters atari games

2.what is reinforcement learning?

3.Markov decision processes (MDP)

4.policy

5.future return

6.discounted future return

7.explore versus exploit

8.policy versus value learning

9.policy learning via policy gradients

10.pole-cart with policy gradients

11.OpenAI Gym

12.creating an agent

13.building the model and optimizer

14.sampling actions

15.keeping track of history

16.policy gradient main function

17.PG agent performance on pole-cart

18.Q-learning and deep Q-networks

19.the Bellman equation

20.issues with value iteration

21.approximating the Q-function

22.deep Q-network (DQN)

23.training DQN

24.learning stability

25.target Q-network

26.experience replay

27.from Q-function to policy

28.DQN and the Markov assumption

29.DQN’s solution to the Markov assumption

30.playing breakout with DQN

31.building our architecture

32.stacking frames

33.setting up training operations

34.updating our target Q-network

35.implementing experience replay

36.DQN main loop

37.DQN agent results on breakout

38.improving and moving beyond DQN

39.deep recurrent Q-networks (DRQN)

40.asynchronous advantage actor-critic agent (A3C)

41.UNsupervised REinforcement and Auxiliary Learning (UNREAL)

42.summary