What Is Deep Learning?
Deep learning is a machine learning technique that learns features and tasks directly from data. The data can be images, text, or sound, but deep learning concepts can also be applied to other types of data. Deep learning is often referred to as end-to-end learning, and it has had a big impact in areas such as computer vision and natural language processing.

In this example, we have a set of images and we want to recognize which category of objects each image belongs to: cars, trucks, or boats. We start with a labeled set of images, or training data, where the labels correspond to the desired outputs of the task. The deep learning algorithm needs these labels because they tell it about the specific features and objects in each image. The algorithm then learns how to classify input images into the desired categories. We use the term end-to-end learning because the task is learned directly from data.
Advantages of Deep Learning
- Deep learning methods can now match or exceed human-level accuracy at classifying images.
- GPUs enable us to train deep networks in less time.
- The large amounts of labeled data required by deep learning methods, such as deep neural network architectures, have become widely accessible.
Artificial Neural Networks
What Are Artificial Neural Networks?
Artificial neural networks use algorithms inspired by the structure and function of the brain's neural networks; for this reason, the models used in deep learning are called artificial neural networks.
How Do Artificial Neural Networks Work?
Artificial neural networks are computing systems inspired by the brain's neural networks. These networks are based on a collection of connected units called artificial neurons, or simply neurons. Each connection between neurons can transmit a signal from one neuron to another; the receiving neuron processes the signal and then signals the downstream neurons connected to it. Typically, neurons are organized in layers, and different layers may perform different kinds of transformations on their inputs. Signals essentially travel from the first layer, called the input layer, to the last layer, called the output layer. Any layers between the input and output layers are called hidden layers.

In the image above, the first layer is the input layer, so these nodes are all our inputs. Their values are then transferred to a hidden layer. This hidden layer has four different neurons; each node illustrates a neuron, and together within this column they represent a layer. Since this layer sits between the input and output layers, it is a hidden layer. From this hidden layer, the nodes transmit signals down to the nodes in the output layer.
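To make the signal flow concrete before the Keras version below, here is a minimal numpy sketch of one forward pass through such a network; the input values and weights are random numbers chosen purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

x = np.array([0.5, -1.2, 3.0])        # input layer: one sample with 3 features
W1 = rng.normal(size=(3, 4))          # weights from the inputs to 4 hidden neurons
b1 = np.zeros(4)                      # hidden-layer biases
W2 = rng.normal(size=(4, 1))          # weights from the hidden layer to 1 output neuron
b2 = np.zeros(1)                      # output bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden = sigmoid(x @ W1 + b1)         # each hidden neuron processes its incoming signals
output = sigmoid(hidden @ W2 + b2)    # and signals the output neuron downstream
print(output)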
from keras.models import Sequential
from keras.layers import Dense

# make the keras model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))   # first hidden layer: 12 nodes, relu activation
model.add(Dense(8, activation='relu'))                  # second hidden layer: 8 nodes, relu activation
model.add(Dense(1, activation='sigmoid'))               # output layer: one node, sigmoid activation

# compile the keras model
# binary_crossentropy is the cross-entropy loss for binary classification problems
# adam is an efficient stochastic gradient descent algorithm that tunes itself for better results
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# fit the model
# epochs is a hyperparameter: the number of passes over the entire dataset
# batch_size is a hyperparameter: the number of samples processed before the model is updated
# X and y are the input features and labels loaded from the dataset (see the full code link below)
model.fit(X, y, epochs=150, batch_size=10, verbose=0)
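Once fitted, the model can be evaluated and used for predictions. This is just a quick sketch that assumes X and y are the same arrays passed to fit above.

# evaluate the fitted model on the training data
_, accuracy = model.evaluate(X, y, verbose=0)
print('Accuracy: %.2f' % (accuracy * 100))

# make class predictions from the sigmoid probabilities
predictions = (model.predict(X) > 0.5).astype(int)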
Full Code: Click Here
Convolutional Neural Networks
What Are Convolutional Neural Networks?
A convolutional neural network is especially well-suited to working with image data. The term deep usually refers to the number of hidden layers in the neural network: while traditional neural networks only contain two or three hidden layers, deep networks can have many more. One of the most popular types of deep neural network today is the CNN.
How Do Convolutional Neural Networks Work?

A CNN can have tens or hundreds of hidden layers that each learn to detect different features in an image. Looking at the feature maps, we can see that every hidden layer increases the complexity of the learned image features: for example, the first hidden layer learns how to detect edges, and the last learns how to detect more complex shapes. As in a traditional neural network, the final layer connects every neuron from the last hidden layer to the output neurons.
A CNN works on three key concepts: first, local receptive fields; second, shared weights and biases; and third, activation and pooling.
Local receptive fields: in a typical neural network, each neuron in the input layer is connected to a neuron in the hidden layer. In a CNN, however, only a small region of input-layer neurons connects to each neuron in the hidden layer. These regions are referred to as local receptive fields. The local receptive field is translated across the image to create a feature map from the input layer to the hidden-layer neurons. You can use convolution to implement this process efficiently, which is why it is called a convolutional neural network.
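As an illustration, here is a minimal numpy sketch of a 3x3 local receptive field sliding across an image to build a feature map; the image and kernel values are arbitrary example data.

import numpy as np

def conv2d_valid(image, kernel):
    # slide the local receptive field across the image and record one output per position
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]            # the local receptive field
            feature_map[i, j] = np.sum(patch * kernel)   # same weights applied at every position
    return feature_map

image = np.random.rand(6, 6)                 # toy 6x6 grayscale image
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])         # simple vertical-edge detector
print(conv2d_valid(image, edge_kernel).shape)  # (4, 4) feature map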

Shared weights and biases: the model learns the weight and bias values during the training process and continuously updates them with each new training example. In a CNN, the weight and bias values are the same for all hidden neurons in a given layer. This means that all hidden neurons are detecting the same feature, such as an edge or a blob, in different regions of the image, which makes the network tolerant to translation of objects in an image.
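To see why sharing weights keeps the model small, the arithmetic below compares a single shared 3x3 kernel against a hypothetical fully connected mapping of the same image; the 28x28 input size is an assumption chosen only for illustration.

# parameters for one shared 3x3 kernel (plus its bias), reused at every image position
conv_params = 3 * 3 + 1                              # 10 parameters

# parameters if every pixel of a 28x28 image were fully connected to a 26x26 feature map
dense_params = (28 * 28) * (26 * 26) + (26 * 26)     # about 530,000 parameters

print(conv_params, dense_params)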

Activation: the activation step applies a transformation to the output of each neuron by using activation functions. The rectified linear unit (ReLU) is an example of a commonly used activation function: it keeps the output of a neuron as-is if it is positive, and maps it to zero if it is negative.
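A one-line numpy sketch of ReLU, with example values chosen only for illustration:

import numpy as np

def relu(x):
    # keep positive values as-is, map negative values to zero
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))  # [0.  0.  0.  1.5 3. ]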

Pooling: pooling reduces the dimensionality of the feature map by condensing the output of small regions of neurons into a single output.
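Here is a small numpy sketch of 2x2 max pooling, which condenses each 2x2 region of a feature map into its single largest value; the 4x4 input is a toy example.

import numpy as np

def max_pool_2x2(feature_map):
    h, w = feature_map.shape
    # reshape into 2x2 blocks and take the maximum of each block
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 feature map
print(max_pool_2x2(fm))                         # condensed to a 2x2 output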
This helps simplify the following layers and reduces the number of parameters that the model needs to learn. Now, let's pull it all together:
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, LeakyReLU

# define variables
batch_size = 64
epochs = 20
num_classes = 10

# build the convolutional neural network with Conv2D layers
fashion_model = Sequential()
fashion_model.add(Conv2D(32, kernel_size=(3, 3), activation='linear', input_shape=(28, 28, 1), padding='same'))
fashion_model.add(LeakyReLU(alpha=0.1))
fashion_model.add(MaxPooling2D((2, 2), padding='same'))
fashion_model.add(Conv2D(64, (3, 3), activation='linear', padding='same'))
fashion_model.add(LeakyReLU(alpha=0.1))
fashion_model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
fashion_model.add(Conv2D(128, (3, 3), activation='linear', padding='same'))
fashion_model.add(LeakyReLU(alpha=0.1))
fashion_model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
fashion_model.add(Flatten())
fashion_model.add(Dense(128, activation='linear'))
fashion_model.add(LeakyReLU(alpha=0.1))
fashion_model.add(Dense(num_classes, activation='softmax'))

# compile the model
fashion_model.compile(loss=keras.losses.categorical_crossentropy,
                      optimizer=keras.optimizers.Adam(),
                      metrics=['accuracy'])
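As a usage sketch, the compiled model can then be trained with the batch size and epoch count defined above; the array names train_X and train_Y_one_hot are assumptions here (they come from the data preparation in the full code linked below).

# train the CNN: assumes train_X has shape (n, 28, 28, 1) and
# train_Y_one_hot is one-hot encoded with num_classes columns
fashion_train = fashion_model.fit(train_X, train_Y_one_hot,
                                  batch_size=batch_size,
                                  epochs=epochs,
                                  verbose=1)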
Full Code: Click Here
Final Outcome
Finally, deep learning is a type of machine learning inspired by the structure of the human brain; in deep learning, this structure is called an artificial neural network.