Key Features

  • Gain the skills and competencies required by industry, taught by experts.
  • Work on real-time projects, depending on the course you select.
  • Students work in a professional corporate environment.
  • Get a globally recognized certificate from WebTek, carrying our partner logos.
  • Global brand recognition for placements.

Deep Learning

Course Objective:

After completing this course, students will have a working overview of deep learning techniques.


Prerequisites: experience in Python, an understanding of calculus, and a basic grasp of machine learning concepts are required.

Course Content:

The Neural Network
  • Building Intelligent Machines
  • The Limits of Traditional Computer Programs
  • The Mechanics of Machine Learning
  • The Neuron
  • Expressing Linear Perceptrons as Neurons
  • Feed-Forward Neural Networks
  • Linear Neurons and Their Limitations
  • Sigmoid, Tanh, and ReLU Neurons
  • Softmax Output Layers
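The sigmoid, tanh, ReLU, and softmax units listed above can be sketched in a few lines of plain Python. This is an illustrative sketch, not course material; the example inputs are arbitrary:

```python
import math

def sigmoid(x):
    # Squashes any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Passes positive inputs through, zeroes out negatives
    return max(0.0, x)

def softmax(logits):
    # Subtract the max for numerical stability, then normalize exponentials
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid(0.0))              # 0.5
print(relu(-2.0), relu(3.0))
print(softmax([1.0, 2.0, 3.0]))  # three probabilities summing to 1
```

Note that softmax, unlike the other three, acts on a whole output layer at once, which is why it appears as its own bullet above.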
Training Feed-Forward Neural Networks
  • The Fast-Food Problem
  • Gradient Descent
  • The Delta Rule and Learning Rates
  • Gradient Descent with Sigmoidal Neurons
  • The Backpropagation Algorithm
  • Stochastic and Minibatch Gradient Descent
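Gradient descent and the delta rule above can be illustrated by fitting a single linear neuron y = w·x with batch gradient descent. This is a minimal sketch with made-up data (true weight 3.0) and a hand-picked learning rate, not the course's own example:

```python
# Toy data generated from y = 3x; we recover w by gradient descent.
data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]

w = 0.0
lr = 0.02  # learning rate (the "delta rule" step size)
for _ in range(200):
    # Gradient of mean squared error wrt w: 2 * mean(x * (w*x - y))
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    w -= lr * grad  # step downhill

print(round(w, 3))  # converges to 3.0
```

Stochastic and minibatch variants differ only in computing `grad` from one example (or a small random subset) per step instead of the full dataset.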
Implementing Neural Networks in TensorFlow
  • What Is TensorFlow?
  • How Does TensorFlow Compare to Alternatives?
  • Creating and Manipulating TensorFlow Variables
  • TensorFlow Operations
  • Placeholder Tensors
  • Sessions in TensorFlow
  • Navigating Variable Scopes and Sharing Variables
  • Managing Models over the CPU and GPU
  • Specifying the Logistic Regression Model in TensorFlow
  • Logging and Training the Logistic Regression Model
  • Leveraging TensorBoard to Visualize Computation Graphs and Learning
  • Building a Multilayer Model for MNIST in TensorFlow
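Placeholders and sessions belong to the TensorFlow 1.x API this module covers and are best tried in TensorFlow itself. As a library-free sketch of the logistic regression model the module builds, here is the same computation in plain Python; the toy one-dimensional dataset and hyperparameters are my own, not from the course:

```python
import math

# Toy 1-D data: label 1 when x > 0
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
        gw += (p - y) * x  # gradient of cross-entropy loss wrt w
        gb += (p - y)      # gradient wrt b
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

print(predict(-1.5), predict(1.5))  # low for negative x, high for positive x
```

In TensorFlow 1.x the same model would be expressed as a graph of variables and ops, fed via placeholders and run inside a session, with TensorBoard visualizing the loss curve.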
Beyond Gradient Descent
  • The Challenges with Gradient Descent
  • Local Minima in the Error Surfaces of Deep Networks
  • Model Identifiability
  • How Pesky Are Spurious Local Minima in Deep Networks?
  • Flat Regions in the Error Surface
  • When the Gradient Points in the Wrong Direction
  • Momentum-Based Optimization
  • A Brief View of Second-Order Methods
  • Learning Rate Adaptation
  • AdaGrad—Accumulating Historical Gradients
  • RMSProp—Exponentially Weighted Moving Average of Gradients
  • Adam—Combining Momentum and RMSProp
  • The Philosophy Behind Optimizer Selection
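Adam's combination of momentum and RMSProp can be sketched in plain Python. The hyperparameter defaults below are the commonly published ones, and the quadratic objective is a made-up illustration, not a course exercise:

```python
import math

def adam_minimize(grad, w, steps=2000, lr=0.1,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    m = v = 0.0  # first (momentum) and second (RMSProp) moment estimates
    for t in range(1, steps + 1):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g        # exponentially weighted gradient
        v = beta2 * v + (1 - beta2) * g * g    # exponentially weighted squared gradient
        m_hat = m / (1 - beta1 ** t)           # bias correction for early steps
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

# Minimize (w - 5)^2; its gradient is 2*(w - 5)
w_star = adam_minimize(lambda w: 2 * (w - 5.0), w=0.0)
print(w_star)  # close to 5.0
```

AdaGrad and RMSProp correspond to dropping the momentum term: AdaGrad accumulates `v` without decay, while RMSProp uses the decayed `v` shown here.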
Convolutional Neural Networks
  • Neurons in Human Vision
  • The Shortcomings of Feature Selection
  • Vanilla Deep Neural Networks Don’t Scale
  • Filters and Feature Maps
  • Full Description of the Convolutional Layer
  • Max Pooling
  • Full Architectural Description of Convolution Networks
  • Closing the Loop on MNIST with Convolutional Networks
  • Image Preprocessing Pipelines Enable More Robust Models
  • Accelerating Training with Batch Normalization
  • Building a Convolutional Network for CIFAR-10
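The filter/feature-map and max-pooling mechanics above can be sketched without any deep learning library. The 4×4 "image" and 2×2 filter are arbitrary illustrations of the operations, not the CIFAR-10 exercise:

```python
def conv2d_valid(image, kernel):
    # Slide the kernel over the image with no padding ("valid" convolution)
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool2x2(fmap):
    # Non-overlapping 2x2 max pooling: keep the strongest activation per block
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

image = [[1, 2, 3, 0],
         [0, 1, 2, 3],
         [3, 0, 1, 2],
         [2, 3, 0, 1]]
kernel = [[1, 1], [1, 1]]           # a tiny 2x2 summing filter
fmap = conv2d_valid(image, kernel)  # 3x3 feature map
print(fmap)                         # [[4, 8, 8], [4, 4, 8], [8, 4, 4]]
print(max_pool2x2(fmap))            # [[8]]
```

A real convolutional layer learns many such kernels across multiple input channels; the sliding-and-pooling structure is what lets convolutional networks scale where vanilla dense networks do not.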
Embedding and Representation Learning
  • Learning Lower-Dimensional Representations
  • Principal Component Analysis
  • Motivating the Autoencoder Architecture
  • Implementing an Autoencoder in TensorFlow
  • Denoising to Force Robust Representations
  • Sparsity in Autoencoders
  • When Context Is More Informative than the Input Vector
  • The Word2Vec Framework
  • Implementing the Skip-Gram Architecture
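The skip-gram idea above, predicting context words from a center word, starts from (center, context) training pairs. This sketch shows only the pair-extraction step, with a made-up sentence; the full Word2Vec model then trains embedding vectors from such pairs:

```python
def skipgram_pairs(tokens, window=2):
    # For each center word, emit (center, context) pairs within the window
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

sentence = "the quick brown fox".split()
print(skipgram_pairs(sentence, window=1))
```

With `window=1` each word pairs only with its immediate neighbors, which is why context, rather than the raw input vector, drives what the learned embedding captures.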
Models for Sequence Analysis
  • Analyzing Variable-Length Inputs
  • Tackling seq2seq with Neural N-Grams
  • Implementing a Part-of-Speech Tagger
  • Dependency Parsing and SyntaxNet
  • Beam Search and Global Normalization
  • A Case for Stateful Deep Learning Models
  • Recurrent Neural Networks
  • The Challenges with Vanishing Gradients
  • Long Short-Term Memory (LSTM) Units
  • TensorFlow Primitives for RNN Models
  • Implementing a Sentiment Analysis Model
  • Solving seq2seq Tasks with Recurrent Neural Networks
  • Augmenting Recurrent Networks with Attention
  • Dissecting a Neural Translation Network
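The recurrent-network recurrence underlying this module can be sketched as a single tanh unit carrying a hidden state through time. The weights and input sequence below are arbitrary illustrations, not course data:

```python
import math

def rnn_forward(xs, w_x, w_h, b):
    # h_t = tanh(w_x * x_t + w_h * h_{t-1} + b), starting from h_0 = 0
    h = 0.0
    states = []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h + b)
        states.append(h)
    return states

states = rnn_forward([1.0, 0.5, -1.0], w_x=0.8, w_h=0.5, b=0.0)
print([round(h, 3) for h in states])
```

Because each step passes the state through a saturating tanh, gradients shrink as they flow back through many steps, the vanishing-gradient problem that motivates the LSTM units listed above.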
Doubt Session & Revision

Q & A

  • Course Duration: 4–6 weeks
  • Suitable For: 2nd/3rd/4th-year B.Tech. / Diploma students
Copyright © 2019