🔥 PyTorch Hands-On Course for Beginners
10 practical modules with code, exercises, and assignments.
Module 1: PyTorch Basics
Explanation:
This module introduces PyTorch tensors, the fundamental data structure used to store data in PyTorch.
You learn how to create tensors, inspect their shape, and understand data types.
Learning Outcomes:
- Understand what a tensor is
- Create tensors using PyTorch
- Inspect tensor shape and data type
Create your first tensor.
import torch
x = torch.tensor([1, 2, 3])
print(x)
print(x.dtype)
print(x.shape)
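Tensors are not limited to one dimension. As a minimal sketch, torch.zeros builds a 2×2 tensor filled with zeros, which can be inspected the same way:

```python
import torch

# torch.zeros creates a tensor of the given shape filled with zeros;
# the default dtype is float32
z = torch.zeros(2, 2)
print(z)
print(z.dtype)   # torch.float32
print(z.shape)   # torch.Size([2, 2])
```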
Exercises
- Create a tensor with values [10, 20, 30]
- Print tensor shape and data type
- Create a 2×2 zero tensor
Assignment
Create a tensor representing marks of 5 students and compute the average.
MCQ Quiz
- 1. What is the core data structure in PyTorch?
A. List
B. Array
C. Tensor
D. Matrix
- 2. Which function creates a tensor?
A. torch.create()
B. torch.tensor()
C. torch.make()
D. torch.build()
- 3. What does x.shape return?
A. Data type
B. Dimensions of tensor
C. Memory size
D. Values
- 4. PyTorch is mainly used for?
A. Web design
B. Deep learning
C. OS development
D. Gaming
- 5. Which library supports GPU acceleration?
A. Pandas
B. NumPy
C. PyTorch
D. Matplotlib
Module 2: Tensor Operations
Explanation:
This module covers basic mathematical operations on tensors such as addition, multiplication,
and dot products using PyTorch.
Learning Outcomes:
- Perform arithmetic operations on tensors
- Apply element-wise operations
- Understand how tensors behave mathematically
import torch
a = torch.tensor([2, 4])
b = torch.tensor([1, 3])
print(a + b)
print(a * b)
print(torch.dot(a, b))
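Subtraction and averaging follow the same pattern; one caveat worth knowing is that torch.mean expects a floating-point tensor, so an integer tensor must be converted first:

```python
import torch

a = torch.tensor([2.0, 4.0])
b = torch.tensor([1.0, 3.0])
print(a - b)        # element-wise subtraction: tensor([1., 1.])
print(a.mean())     # average: tensor(3.)

# mean() on an integer tensor raises an error; cast to float first
c = torch.tensor([10, 20, 30])
print(c.float().mean())  # tensor(20.)
```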
Exercises
- Subtract tensors
- Compute mean
Assignment
Simulate monthly expenses using tensors.
MCQ Quiz
- 1. Which operator performs element-wise addition?
A. +
B. dot()
C. sum()
D. mean()
- 2. What does torch.dot() compute?
A. Element-wise product
B. Dot product
C. Matrix multiplication
D. Sum
- 3. Which operation multiplies tensors element-wise?
A. torch.dot()
B. *
C. torch.mul_all()
D. add()
- 4. Tensor operations are?
A. Scalar only
B. Slow
C. Vectorized
D. Manual
- 5. Which function calculates average?
A. torch.sum()
B. torch.mean()
C. torch.avg()
D. torch.calc()
Module 3: Autograd
Explanation:
This module explains PyTorch’s autograd system, which automatically computes gradients
required for training neural networks.
Learning Outcomes:
- Understand automatic differentiation
- Compute gradients using backward()
- Explain the role of gradients in learning
import torch
x = torch.tensor(2.0, requires_grad=True)
y = x**3 + 4*x
y.backward()
print(x.grad)
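As a check on the mechanics, for y = x² + 5x the derivative is 2x + 5, so autograd should report 11 at x = 3:

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x**2 + 5*x
y.backward()       # computes dy/dx = 2x + 5 at x = 3
print(x.grad)      # tensor(11.)
```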
Exercises
- Compute gradient of y = x² + 5x
- Change value of x and observe gradient
Assignment
Derive gradient for a cubic equation using autograd.
MCQ Quiz
- 1. What does autograd compute?
A. Values
B. Gradients
C. Shapes
D. Loss
- 2. Which flag enables gradient tracking?
A. grad=True
B. requires_grad=True
C. track=True
D. backward=True
- 3. Which method computes gradients?
A. grad()
B. backward()
C. step()
D. compute()
- 4. Gradients are used in?
A. Evaluation
B. Optimization
C. Data loading
D. Saving
- 5. Gradients are stored in?
A. x.value
B. x.grad
C. x.data
D. x.diff
Module 4: Linear Regression
Explanation:
In this module, you build a simple linear regression model using PyTorch’s built-in layers,
loss functions, and optimizers.
Learning Outcomes:
- Build a linear regression model
- Use loss functions and optimizers
- Train a model using gradient descent
import torch
x = torch.tensor([[1.],[2.],[3.]])
y = torch.tensor([[2.],[4.],[6.]])
model = torch.nn.Linear(1,1)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(200):
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(model.weight.item(), model.bias.item())
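Since the data follow y = 2x exactly, the learned weight should approach 2 and the bias 0. A standalone sketch that repeats the training above (with more epochs so it converges tightly) and then predicts an unseen input:

```python
import torch

torch.manual_seed(0)
x = torch.tensor([[1.], [2.], [3.]])
y = torch.tensor([[2.], [4.], [6.]])
model = torch.nn.Linear(1, 1)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(2000):
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# inference: no_grad skips gradient tracking
with torch.no_grad():
    pred = model(torch.tensor([[4.]]))
print(pred)  # close to tensor([[8.]])
```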
Exercises
- Change learning rate and observe loss
- Increase training epochs
Assignment
Train a model to predict salary based on years of experience.
MCQ Quiz
- 1. Linear regression predicts?
A. Categories
B. Continuous values
C. Images
D. Text
- 2. Which loss is used in this module's regression example?
A. BCELoss
B. MSELoss
C. CrossEntropy
D. L1Loss
- 3. Which component updates weights?
A. Loss
B. Optimizer
C. Dataset
D. Model
- 4. Learning rate controls?
A. Data size
B. Step size
C. Accuracy
D. Output
- 5. Which layer is used in this module?
A. nn.Linear
B. nn.Conv2d
C. nn.RNN
D. nn.Dropout
Module 5: Dataset & DataLoader
Explanation:
This module introduces Dataset and DataLoader classes to efficiently load data in batches
during training.
Learning Outcomes:
- Create a custom Dataset class
- Use DataLoader for batching
- Understand data iteration during training
from torch.utils.data import Dataset, DataLoader
import torch
class MyData(Dataset):
    def __init__(self):
        self.x = torch.arange(1, 11).float()
        self.y = self.x * 2
    def __len__(self):
        return len(self.x)
    def __getitem__(self, i):
        return self.x[i], self.y[i]
loader = DataLoader(MyData(), batch_size=2)
for x, y in loader:
    print(x, y)
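Changing batch_size changes how many samples each iteration yields, and shuffle=True reorders the data every epoch. A standalone sketch with the same dataset and batch_size=5:

```python
from torch.utils.data import Dataset, DataLoader
import torch

class MyData(Dataset):
    def __init__(self):
        self.x = torch.arange(1, 11).float()
        self.y = self.x * 2
    def __len__(self):
        return len(self.x)
    def __getitem__(self, i):
        return self.x[i], self.y[i]

# 10 samples with batch_size=5 gives exactly 2 batches;
# shuffle=True draws the samples in a new order each epoch
loader = DataLoader(MyData(), batch_size=5, shuffle=True)
for x, y in loader:
    print(x.shape, y.shape)  # each batch holds 5 samples
```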
Exercises
- Change batch size to 5
- Add random noise to dataset
Assignment
Create a Dataset class for student marks and grades.
MCQ Quiz
- 1. Dataset class is used to?
A. Store data
B. Train model
C. Optimize loss
D. Save model
- 2. DataLoader provides?
A. Prediction
B. Batching
C. Loss
D. Accuracy
- 3. __getitem__ returns?
A. Length
B. One sample
C. All data
D. Batch size
- 4. DataLoader improves?
A. Accuracy
B. Efficiency
C. Loss
D. Weights
- 5. Batch size controls?
A. Epochs
B. Samples per iteration
C. Learning rate
D. Layers
Module 6: Neural Network
Explanation:
This module demonstrates how to construct a multi-layer neural network using linear layers
and activation functions.
Learning Outcomes:
- Understand neural network architecture
- Create hidden layers
- Apply activation functions
import torch.nn as nn
model = nn.Sequential(
    nn.Linear(2, 4),
    nn.ReLU(),
    nn.Linear(4, 1)
)
print(model)
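Running a batch through the network shows how shapes flow: 2 input features map to 1 output per sample. This sketch also adds a second hidden layer and swaps in a Tanh activation, along the lines the exercises suggest:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(2, 4),
    nn.Tanh(),        # alternative activation function
    nn.Linear(4, 4),  # extra hidden layer
    nn.ReLU(),
    nn.Linear(4, 1)
)
out = model(torch.randn(3, 2))  # batch of 3 samples, 2 features each
print(out.shape)  # torch.Size([3, 1])
```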
Exercises
- Add another hidden layer
- Change activation function to Tanh
Assignment
Design a neural network with 3 hidden layers.
MCQ Quiz
- 1. Neural networks consist of?
A. Layers
B. Tables
C. Loops
D. Files
- 2. ReLU is a?
A. Loss
B. Activation function
C. Optimizer
D. Dataset
- 3. Hidden layers increase?
A. Model capacity
B. Data size
C. Speed
D. Storage
- 4. nn.Sequential is used to?
A. Stack layers
B. Train model
C. Load data
D. Save weights
- 5. Output layer size depends on?
A. Problem type
B. Dataset size
C. Epochs
D. Optimizer
Module 7: Training Loop
Explanation:
This module explains the standard training loop used in PyTorch, including forward pass,
loss computation, backward pass, and parameter updates.
Learning Outcomes:
- Implement a complete training loop
- Understand model parameter updates
- Monitor training progress
import torch, torch.nn as nn
x = torch.randn(100,2)
y = x.sum(dim=1, keepdim=True)
model = nn.Linear(2,1)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
for _ in range(100):
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
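To monitor progress (and cover the first exercise), the loop can record the loss and print it every 10 epochs; for this learnable target the loss should fall steadily:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(100, 2)
y = x.sum(dim=1, keepdim=True)   # target: sum of the two features
model = nn.Linear(2, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

losses = []
for epoch in range(100):
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())
    if epoch % 10 == 0:
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```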
Exercises
- Print loss every 10 epochs
- Change optimizer to SGD
Assignment
Implement early stopping logic.
MCQ Quiz
- 1. First step in training loop?
A. Forward pass
B. Backward pass
C. Save model
D. Eval
- 2. backward() computes?
A. Gradients
B. Loss
C. Output
D. Accuracy
- 3. optimizer.step() does?
A. Update weights
B. Reset loss
C. Load data
D. Print output
- 4. optimizer.zero_grad() is used to?
A. Clear gradients
B. Save memory
C. Increase speed
D. Stop training
- 5. Epoch means?
A. One full dataset pass
B. One batch
C. One layer
D. One sample
Module 8: Evaluation
Explanation:
This module focuses on evaluating trained models by switching to evaluation mode and
disabling gradient computation.
Learning Outcomes:
- Use model.eval() correctly
- Disable gradients during evaluation
- Run inference safely
import torch
model.eval()  # 'model' is the trained model from the previous module
with torch.no_grad():
    print(model(torch.tensor([[3.0, 5.0]])))
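For the accuracy assignment, a minimal sketch: threshold the model's predicted probabilities, then compare against the true labels. The probabilities and labels here are made up for illustration:

```python
import torch

# hypothetical predicted probabilities and ground-truth labels
probs  = torch.tensor([0.9, 0.2, 0.7, 0.4])
labels = torch.tensor([1.0, 0.0, 1.0, 1.0])

preds = (probs > 0.5).float()               # 0.5 decision threshold
accuracy = (preds == labels).float().mean()
print(accuracy.item())  # 0.75: three of four predictions match
```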
Exercises
- Test model with multiple inputs
- Compare outputs in train vs eval mode
Assignment
Write evaluation code for accuracy calculation.
MCQ Quiz
- 1. model.eval() is used for?
A. Evaluation
B. Training
C. Saving
D. Loading
- 2. torch.no_grad() does?
A. Disables gradients
B. Speeds GPU
C. Clears memory
D. Stops training
- 3. Evaluation mode affects?
A. Dropout & BatchNorm
B. Loss only
C. Optimizer
D. Dataset
- 4. Evaluation should be done?
A. After training
B. Before training
C. During loading
D. During saving
- 5. Are gradients needed during evaluation?
A. No
B. Yes
C. Sometimes
D. Always
Module 9: Save & Load
Explanation:
This module teaches how to save trained models to disk and load them later without retraining.
Learning Outcomes:
- Save model parameters
- Reload trained models
- Reuse models for inference or deployment
torch.save(model.state_dict(),"model.pth")
model.load_state_dict(torch.load("model.pth"))
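A complete round trip looks like this: loading requires first building a model with the same architecture, after which the restored model produces identical outputs:

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
torch.save(model.state_dict(), "model.pth")

# an instance with the SAME architecture is required before loading
restored = nn.Linear(2, 1)
restored.load_state_dict(torch.load("model.pth"))

x = torch.randn(1, 2)
print(torch.equal(model(x), restored(x)))  # True: identical weights
```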
Exercises
- Save model after training
- Reload model and predict
Assignment
Implement versioned model saving.
MCQ Quiz
- 1. state_dict stores?
A. Model parameters
B. Dataset
C. Loss
D. Code
- 2. torch.save() is used to?
A. Save model
B. Train model
C. Evaluate
D. Load data
- 3. Model loading requires?
A. Same architecture
B. Same data
C. Same loss
D. Same optimizer
- 4. Saved models are useful for?
A. Deployment
B. Data cleaning
C. Visualization
D. Preprocessing
- 5. File extension commonly used?
A. .pth
B. .csv
C. .txt
D. .json
Module 10: Binary Classification
Explanation:
This module integrates all previous concepts to build a complete binary classification
model using a neural network.
Learning Outcomes:
- Build an end-to-end classification model
- Use sigmoid activation and BCELoss
- Train and evaluate a real-world model
import torch, torch.nn as nn
x = torch.randn(200,2)
y = (x.sum(1)>0).float().unsqueeze(1)
model = nn.Sequential(
    nn.Linear(2, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
    nn.Sigmoid()
)
loss_fn = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(300):
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("Training done")
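The mini project's accuracy check can be sketched by repeating the training setup above and then thresholding the sigmoid outputs at 0.5, as in Module 8 (a seed is added so the run is reproducible):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(200, 2)
y = (x.sum(1) > 0).float().unsqueeze(1)  # class 1 if features sum > 0
model = nn.Sequential(
    nn.Linear(2, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
    nn.Sigmoid()
)
loss_fn = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(300):
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# evaluate: threshold probabilities at 0.5 and compare with labels
with torch.no_grad():
    preds = (model(x) > 0.5).float()
    acc = (preds == y).float().mean()
print(f"train accuracy: {acc.item():.2f}")
```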
Mini Project
- Generate your own dataset
- Train binary classifier
- Evaluate accuracy
Final Assignment
Build a spam vs ham classifier using PyTorch.
MCQ Quiz
- 1. Binary classification predicts?
A. Two classes
B. Multiple classes
C. Numbers
D. Images
- 2. Sigmoid output range?
A. 0 to 1
B. -1 to 1
C. 0 to 10
D. Any value
- 3. BCELoss is used for?
A. Binary classification
B. Regression
C. Clustering
D. Detection
- 4. Threshold commonly used?
A. 0.5
B. 1.0
C. 0.1
D. 0.9
- 5. This module combines?
A. All previous concepts
B. Only tensors
C. Only datasets
D. Only saving