🔷 1. Introduction to PyTorch

You have already learned TensorFlow and Keras; now it is time for PyTorch, the framework most popular in the Deep Learning community for research and flexibility.

PyTorch is an open-source deep learning framework developed by the Facebook AI Research lab (FAIR) in 2016.

This framework is especially popular among researchers and advanced developers because it:

  • Provides a dynamic computation graph (built and modified at runtime; see the short sketch below)
  • Offers a Pythonic, NumPy-like syntax
  • Makes GPU acceleration straightforward
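
A minimal sketch of what a dynamic graph means in practice: ordinary Python control flow can run inside the computation, and autograd records whichever path actually executes on each forward pass (the tensor values here are arbitrary).

import torch

x = torch.randn(3, requires_grad=True)

# Ordinary Python control flow decides the computation at runtime;
# autograd records whichever branch actually runs.
if x.sum() > 0:
    y = (x * 2).sum()
else:
    y = (x ** 2).sum()

y.backward()
print(x.grad)  # gradient of the branch that was executed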

🔑 Features:

Feature            | Description
🧮 Dynamic Graphs  | Real-time control (more flexibility)
📊 Tensor Library  | NumPy-like operations with GPU support
🧠 Autograd        | Automatic gradient calculation
🔧 Modular API     | Neural nets = Modules
🖥️ GPU Ready       | CUDA support

🔶 2. PyTorch Installation

pip install torch torchvision
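
A quick sanity check after installation (the output depends on your machine and on whether a CUDA-enabled build was installed):

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True only if a CUDA GPU and matching drivers are present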

🔷 3. Tensors in PyTorch

Tensors are multi-dimensional arrays (similar to NumPy arrays) but with GPU support.

import torch

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(x.shape) # torch.Size([2, 2])
print(x + x) # Tensor addition
print(x @ x.T) # Matrix multiplication
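
Since the NumPy similarity is a core part of the tensor API, here is a short interop sketch (assuming NumPy is installed; note that torch.from_numpy shares memory with the source array rather than copying it):

import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
t = torch.from_numpy(a)   # no copy: shares memory with the NumPy array
print(t.dtype)            # torch.float64 (NumPy's default float type)
print(t.numpy())          # view the tensor as a NumPy array again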

✅ Use GPU:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # pick the GPU if one is available
x = x.to(device)  # move the tensor to that device
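
A small follow-up sketch (illustrative only): tensors can also be created directly on the chosen device, and operations require their operands to live on the same device.

z = torch.zeros(2, 2, device=device)  # allocate directly on the GPU (or CPU fallback)
print(z.device)                       # cuda:0 or cpu, depending on availability
print(x + z)                          # works because x was moved to the same device above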

🔷 4. Autograd – Automatic Differentiation

x = torch.tensor(2.0, requires_grad=True)
y = x**3 + 2*x
y.backward()
print(x.grad) # dy/dx = 3x^2 + 2 = 3*2^2 + 2 = 14
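
One detail worth knowing: .grad accumulates across backward() calls, which is exactly why the training loops below call optimizer.zero_grad(). A small sketch:

x = torch.tensor(2.0, requires_grad=True)

for _ in range(2):
    y = x**3 + 2*x
    y.backward()          # each call adds to x.grad

print(x.grad)             # tensor(28.) = 14 + 14, accumulated over two calls
x.grad.zero_()            # reset the gradient in place
print(x.grad)             # tensor(0.)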

🔷 5. Building a Simple Neural Network

🔨 Step 1: Import Libraries

import torch
import torch.nn as nn
import torch.optim as optim

🔨 Input and Output for XOR

X = torch.tensor([[0.,0.],[0.,1.],[1.,0.],[1.,1.]])
y = torch.tensor([[0.],[1.],[1.],[0.]])

🔨 Step 2: Define Model


class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        self.fc1 = nn.Linear(2, 4)   # input layer: 2 features -> 4 hidden units
        self.fc2 = nn.Linear(4, 1)   # output layer: 4 hidden units -> 1 output

    def forward(self, x):
        x = torch.relu(self.fc1(x))         # hidden layer with ReLU activation
        return torch.sigmoid(self.fc2(x))   # sigmoid output for binary classification

🔨 Step 3: Instantiate Model

model = MyNet()
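
A quick, purely illustrative way to inspect the model just created:

print(model)  # shows fc1 and fc2 with their input/output sizes

# 2*4 + 4 (fc1) + 4*1 + 1 (fc2) = 17 trainable parameters
print(sum(p.numel() for p in model.parameters()))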

🔨 Step 4: Define Loss and Optimizer

criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)
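
Note that nn.BCELoss expects probabilities in [0, 1], which is why forward() ends with a sigmoid. A common alternative (shown only as a sketch, not used in the rest of this section) is to output raw logits and use the numerically more stable combined loss:

# Alternative: remove torch.sigmoid from forward() and use this loss instead
criterion_alt = nn.BCEWithLogitsLoss()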

🔨 Step 5: Train the Model

for epoch in range(100):
    y_pred = model(X)              # forward pass
    loss = criterion(y_pred, y)    # compute the loss

    optimizer.zero_grad()          # clear accumulated gradients
    loss.backward()                # backpropagate
    optimizer.step()               # update parameters

    print(f"Epoch {epoch}, Loss: {loss.item()}")

🔧 Important PyTorch Modules

Module              | Description
torch.Tensor        | Main data structure
torch.nn            | For building neural nets
torch.nn.functional | Activation functions, loss functions
torch.optim         | Optimizers like Adam, SGD
torch.utils.data    | Dataset and DataLoader tools
torchvision         | Image datasets and transformations
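
The torch.utils.data entry becomes important once the data no longer fits comfortably in a single tensor. A minimal sketch using the XOR tensors from above (the batch size here is arbitrary):

from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(X, y)                    # pairs each input row with its label
loader = DataLoader(dataset, batch_size=2, shuffle=True)

for batch_X, batch_y in loader:
    print(batch_X.shape, batch_y.shape)          # torch.Size([2, 2]) torch.Size([2, 1])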

🔷 6. Example: XOR with MLP

X = torch.tensor([[0.,0.],[0.,1.],[1.,0.],[1.,1.]])
y = torch.tensor([[0.],[1.],[1.],[0.]])

model = nn.Sequential(
    nn.Linear(2, 4),
    nn.ReLU(),
    nn.Linear(4, 1),
    nn.Sigmoid()
)

loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

for epoch in range(2000):
    y_pred = model(X)
    loss = loss_fn(y_pred, y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if epoch % 200 == 0:
        print(f"Epoch {epoch}, Loss: {loss.item()}")

📝 Practice Questions:

  1. What is the main difference between PyTorch and TensorFlow?
  2. How do you create a tensor in PyTorch?
  3. How do you use autograd to compute gradients?
  4. How do you build a simple model class?
  5. What is the difference between the Sequential API and a custom model class?

🧠 Summary Table

Concept       | Explanation
Tensor        | PyTorch's data container (NumPy + GPU)
Autograd      | Automatic differentiation
nn.Module     | Base class for neural network architectures
Optimizer     | Updates the model's parameters
Loss Function | Measures the model's error