Logic Gates (AND, OR, NOT, XOR, etc.)
Foundation of computation—basic building blocks of digital circuits.
Perform simple boolean operations.
Example: AND gate outputs 1 only if both inputs are 1.
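A minimal illustrative sketch of these gates as plain Python functions (0/1 inputs):

```python
# Minimal sketch: the basic gates as Python functions on 0/1 inputs.
def AND(a, b): return int(a and b)
def OR(a, b):  return int(a or b)
def NOT(a):    return int(not a)
def XOR(a, b): return int(a != b)

# AND outputs 1 only when both inputs are 1.
print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
```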
Perceptron (Single-layer Neural Network)
The simplest type of artificial neuron, inspired by biological neurons.
Can mimic logic gates using weights and bias.
Activation function: Step function (outputs 0 or 1).
Limitation: Cannot solve the XOR problem (i.e., non-linearly separable problems).
y = f(W·X + b), where W = weights, X = input, b = bias, f = activation function.
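A minimal sketch of y = f(W·X + b) with a step activation, using hand-picked weights that happen to realize an AND gate (the values are illustrative, not learned):

```python
import numpy as np

def step(z):
    # Step activation: outputs 1 if z >= 0, else 0.
    return int(z >= 0)

def perceptron(X, W, b):
    # y = f(W . X + b)
    return step(np.dot(W, X) + b)

# One possible choice of weights/bias that realizes an AND gate.
W, b = np.array([1.0, 1.0]), -1.5
for X in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(X, perceptron(np.array(X), W, b))  # outputs 1 only for (1, 1)
```

No such choice of W and b exists for XOR, which is exactly the limitation noted above.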
Artificial Neural Network (ANN) (Multi-layer Perceptron – MLP)
Fixes the XOR problem by introducing hidden layers (see the sketch after this list).
Uses non-linear activation functions (e.g., ReLU, Sigmoid).
Multiple perceptrons stacked together.
Still struggles with large, structured inputs such as images and sequences, which motivates specialized architectures (CNNs, RNNs).
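A minimal sketch of how one hidden layer solves XOR, using hand-picked (not learned) weights and step activations:

```python
import numpy as np

step = lambda z: (z >= 0).astype(int)

# Hidden layer: unit 1 acts like OR, unit 2 like AND (hand-picked weights).
W1 = np.array([[1.0, 1.0],    # OR unit
               [1.0, 1.0]])   # AND unit
b1 = np.array([-0.5, -1.5])

# Output layer: fires when OR is on but AND is off, i.e. XOR.
W2 = np.array([1.0, -1.0])
b2 = -0.5

def mlp_xor(x):
    h = step(W1 @ x + b1)          # hidden layer
    return int(step(W2 @ h + b2))  # output layer

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mlp_xor(np.array(x)))  # 0, 1, 1, 0
```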
Backpropagation Algorithm (Training ANNs)
Introduced to update weights efficiently using gradient descent.
Error is propagated backward from output to input.
Uses partial derivatives to minimize loss.
🔹 Steps:
Forward pass: Compute output.
Loss calculation: Compare output with actual value.
Backward pass: Adjust weights using gradient descent.
Repeat until convergence.
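A minimal NumPy sketch of this loop on the XOR data (the network size, learning rate, and epoch count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Tiny 2-2-1 network trained on XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)
lr = 0.5

for epoch in range(10000):
    # Forward pass: compute output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Loss calculation: mean squared error against the targets.
    loss = np.mean((out - y) ** 2)

    # Backward pass: propagate the error backward and adjust weights
    # by gradient descent (constant factors folded into the learning rate).
    d_out = (out - y) * out * (1 - out)   # error at output, times sigmoid derivative
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated to the hidden layer
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # typically approaches [[0], [1], [1], [0]] (depends on the random init)
```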
Convolutional Neural Networks (CNNs)
Designed for image processing and computer vision tasks.
Uses convolutional layers to detect patterns like edges, textures, etc.
Pooling layers reduce dimensionality, improving efficiency.
Example applications: Image Captioning, Object Detection, Face Recognition.
🔹 Key components:
Convolutional layers (Feature extraction)
Pooling layers (Downsampling)
Fully Connected layers (Classification)
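A minimal sketch of these components, assuming PyTorch is available; the layer sizes are illustrative and assume 28×28 grayscale inputs:

```python
import torch
import torch.nn as nn

# Illustrative CNN for 28x28 grayscale images; all sizes are assumptions.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer: feature extraction
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer: downsampling (28x28 -> 14x14)
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                   # fully connected layer: classification
)

x = torch.randn(1, 1, 28, 28)   # one dummy image
print(model(x).shape)           # torch.Size([1, 10]) -> 10 class scores
```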
Recurrent Neural Networks (RNNs)
Designed for sequential data like text, speech, and time series.
Maintains a memory of previous inputs using loops.
Common problem: Vanishing gradient (solved by LSTM & GRU).
Example applications: Text Generation, Speech Recognition, Machine Translation.
🔹 Variants:
Vanilla RNN: Simple version, suffers from vanishing gradient.
LSTM (Long Short-Term Memory): Fixes vanishing gradient issue.
GRU (Gated Recurrent Unit): Similar to LSTM but with fewer gates, making it more computationally efficient.
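A minimal NumPy sketch of the vanilla RNN recurrence h_t = tanh(W_x·x_t + W_h·h_{t-1} + b), with illustrative dimensions; the loop over time steps is the "memory" mechanism described above:

```python
import numpy as np

# Minimal vanilla RNN cell sketch; dimensions are illustrative assumptions.
input_size, hidden_size, seq_len = 4, 8, 5
rng = np.random.default_rng(0)
W_x = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b = np.zeros(hidden_size)

xs = rng.normal(size=(seq_len, input_size))   # a toy input sequence
h = np.zeros(hidden_size)                     # hidden state carried between steps
for x_t in xs:
    # Each step mixes the new input with the previous hidden state.
    h = np.tanh(W_x @ x_t + W_h @ h + b)

print(h.shape)  # (8,) -- final hidden state summarizing the sequence
```

Repeatedly multiplying by W_h is also why gradients shrink over long sequences, which is the vanishing gradient problem LSTMs and GRUs address.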
Summary:
Logic Gates → Basis of computation.
Perceptron → Simple neuron that mimics logic gates.
ANN (MLP) → Multi-layer perceptron solves non-linear problems.
Backpropagation → Algorithm for training neural networks.
CNN → Best for images.
RNN → Best for sequential data.