PyTorch Hands-On Course for Beginners
Practical deep learning with real code, exercises, quizzes, and assignments.
10 Modules
Code + Exercises
MCQ Quizzes
01 PyTorch Basics
// Overview
This module introduces PyTorch tensors, the fundamental data structure used to store data in PyTorch. You learn how to create tensors, inspect their shape, and understand data types.
// Learning Outcomes
▶Understand what a tensor is
▶Create tensors using PyTorch
▶Inspect tensor shape and data type
// Code
python
import torch
x = torch.tensor([1, 2, 3])
print(x)
print(x.dtype)
print(x.shape)
Exercises
Create a tensor with values [10, 20, 30]
Print tensor shape and data type
Create a 2×2 zero tensor
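The exercises above can be sketched as follows; the tensor values are the ones given, and the comments assume PyTorch's default dtypes:

```python
import torch

t = torch.tensor([10, 20, 30])
print(t.shape)   # shape of the 1-D tensor
print(t.dtype)   # integer input defaults to torch.int64

z = torch.zeros(2, 2)   # 2x2 tensor filled with zeros (float32 by default)
print(z)
```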
Assignment
Create a tensor representing marks of 5 students and compute the average.
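One possible solution to the assignment; the marks below are made-up sample values:

```python
import torch

# Hypothetical marks of 5 students (out of 100)
marks = torch.tensor([72.0, 85.0, 90.0, 66.0, 78.0])

average = marks.mean()   # torch.mean requires a floating-point tensor
print(average.item())
```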
MCQ Quiz
1. What is the core data structure in PyTorch?
A. List
B. Array
C. Tensor
D. Matrix
2. Which function creates a tensor?
A. torch.create()
B. torch.tensor()
C. torch.make()
D. torch.build()
3. What does x.shape return?
A. Data type
B. Dimensions of tensor
C. Memory size
D. Values
4. PyTorch is mainly used for?
A. Web design
B. Deep learning
C. OS development
D. Gaming
5. Which library supports GPU acceleration?
A. Pandas
B. NumPy
C. PyTorch
D. Matplotlib
02 Tensor Operations
// Overview
This module covers basic mathematical operations on tensors such as addition, multiplication, and dot products using PyTorch.
// Learning Outcomes
▶Perform arithmetic operations on tensors
▶Apply element-wise operations
▶Understand tensor math behavior
// Code
python
import torch
a = torch.tensor([2, 4])
b = torch.tensor([1, 3])
print(a + b)
print(a * b)
print(torch.dot(a, b))
Exercises
Subtract tensors
Compute mean
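The two exercises can be sketched like this; note that `mean()` needs a floating-point tensor, so the values below use floats rather than the integers in the module code:

```python
import torch

a = torch.tensor([2., 4.])
b = torch.tensor([1., 3.])

print(a - b)      # element-wise subtraction
print(a.mean())   # mean of all elements; requires a floating-point dtype
```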
Assignment
Simulate monthly expenses using tensors.
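One way to approach the assignment; the categories and figures here are hypothetical:

```python
import torch

# Hypothetical expenses (rent, food, transport, misc) over 3 months
expenses = torch.tensor([
    [800., 250., 60.,  90.],
    [800., 270., 55., 120.],
    [800., 240., 70.,  80.],
])

print(expenses.sum(dim=1))    # total spent in each month
print(expenses.mean(dim=0))   # average spend per category
```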
MCQ Quiz
1. Which operator performs element-wise addition?
A. +
B. dot()
C. sum()
D. mean()
2. What does torch.dot() compute?
A. Element-wise product
B. Dot product
C. Matrix multiplication
D. Sum
3. Which operation multiplies tensors element-wise?
A. torch.dot()
B. *
C. torch.mul_all()
D. add()
4. Tensor operations are?
A. Scalar only
B. Slow
C. Vectorized
D. Manual
5. Which function calculates average?
A. torch.sum()
B. torch.mean()
C. torch.avg()
D. torch.calc()
03 Autograd
// Overview
This module explains PyTorch's autograd system, which automatically computes gradients required for training neural networks.
// Learning Outcomes
▶Understand automatic differentiation
▶Compute gradients using backward()
▶Explain the role of gradients in learning
// Code
python
import torch
x = torch.tensor(2.0, requires_grad=True)
y = x**3 + 4*x
y.backward()
print(x.grad)
Exercises
Compute gradient of y = x² + 5x
Change value of x and observe gradient
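A sketch of the first exercise; for y = x² + 5x the analytic gradient is 2x + 5, which autograd should reproduce (x = 3 is an arbitrary choice):

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x**2 + 5*x
y.backward()
print(x.grad)   # dy/dx = 2x + 5 = 11 at x = 3
```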
Assignment
Derive gradient for a cubic equation using autograd.
MCQ Quiz
1. What does autograd compute?
A. Values
B. Gradients
C. Shapes
D. Loss
2. Which flag enables gradient tracking?
A. grad=True
B. requires_grad=True
C. track=True
D. backward=True
3. Which method computes gradients?
A. grad()
B. backward()
C. step()
D. compute()
4. Gradients are used in?
A. Evaluation
B. Optimization
C. Data loading
D. Saving
5. Gradients are stored in?
A. x.value
B. x.grad
C. x.data
D. x.diff
04 Linear Regression
// Overview
In this module, you build a simple linear regression model using PyTorch's built-in layers, loss functions, and optimizers.
// Learning Outcomes
▶Build a linear regression model
▶Use loss functions and optimizers
▶Train a model using gradient descent
// Code
python
import torch
x = torch.tensor([[1.],[2.],[3.]])
y = torch.tensor([[2.],[4.],[6.]])
model = torch.nn.Linear(1,1)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(200):
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(model.weight.item(), model.bias.item())
Exercises
Change learning rate and observe loss
Increase training epochs
Assignment
Train a model to predict salary based on years of experience.
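One possible solution, following the same pattern as the module code; the salary data is synthetic, generated from an assumed relationship (salary ≈ 30 + 5 × years, in thousands):

```python
import torch

torch.manual_seed(0)

# Synthetic data: salary (in thousands) = 30 + 5 * years of experience
years = torch.tensor([[1.], [2.], [3.], [4.], [5.]])
salary = 30 + 5 * years

model = torch.nn.Linear(1, 1)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for _ in range(2000):
    loss = loss_fn(model(years), salary)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The learned weight and bias should approach 5 and 30
print(model.weight.item(), model.bias.item())
```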
MCQ Quiz
1. Linear regression predicts?
A. Categories
B. Continuous values
C. Images
D. Text
2. Which loss is most commonly used for regression?
A. BCELoss
B. MSELoss
C. CrossEntropy
D. L1Loss
3. Which component updates the weights?
A. Loss
B. Optimizer
C. Dataset
D. Model
4. Learning rate controls?
A. Data size
B. Step size
C. Accuracy
D. Output
5. Which layer is used?
A. nn.Linear
B. nn.Conv2d
C. nn.RNN
D. nn.Dropout
05 Dataset & DataLoader
// Overview
This module introduces Dataset and DataLoader classes to efficiently load data in batches during training.
// Learning Outcomes
▶Create a custom Dataset class
▶Use DataLoader for batching
▶Understand data iteration during training
// Code
python
from torch.utils.data import Dataset, DataLoader
import torch
class MyData(Dataset):
    def __init__(self):
        self.x = torch.arange(1, 11).float()
        self.y = self.x * 2

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        return self.x[i], self.y[i]

loader = DataLoader(MyData(), batch_size=2)
for x, y in loader:
    print(x, y)
Exercises
Change batch size to 5
Add random noise to dataset
Assignment
Create a Dataset class for student marks and grades.
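A sketch of the assignment; the marks are hypothetical, and deriving a pass/fail grade from a threshold of 40 is an arbitrary choice:

```python
from torch.utils.data import Dataset, DataLoader
import torch

class MarksData(Dataset):
    """Hypothetical dataset: marks out of 100 and a pass/fail grade."""
    def __init__(self):
        self.marks = torch.tensor([35., 62., 88., 45., 73.])
        self.grades = (self.marks >= 40).float()   # 1.0 = pass, 0.0 = fail

    def __len__(self):
        return len(self.marks)

    def __getitem__(self, i):
        return self.marks[i], self.grades[i]

loader = DataLoader(MarksData(), batch_size=2, shuffle=True)
for marks, grades in loader:
    print(marks, grades)
```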
MCQ Quiz
1. Dataset class is used to?
A. Store data
B. Train model
C. Optimize loss
D. Save model
2. DataLoader provides?
A. Prediction
B. Batching
C. Loss
D. Accuracy
3. __getitem__ returns?
A. Length
B. One sample
C. All data
D. Batch size
4. DataLoader improves?
A. Accuracy
B. Efficiency
C. Loss
D. Weights
5. Batch size controls?
A. Epochs
B. Samples per iteration
C. Learning rate
D. Layers
06 Neural Network
// Overview
This module demonstrates how to construct a multi-layer neural network using linear layers and activation functions.
// Learning Outcomes
▶Understand neural network architecture
▶Create hidden layers
▶Apply activation functions
// Code
python
import torch.nn as nn
model = nn.Sequential(
    nn.Linear(2, 4),
    nn.ReLU(),
    nn.Linear(4, 1)
)
print(model)
Exercises
Add another hidden layer
Change activation function to Tanh
Assignment
Design a neural network with 3 hidden layers.
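One possible design for the assignment; the layer widths (16, 8, 4) are arbitrary choices:

```python
import torch.nn as nn

# 2 inputs -> three hidden layers -> 1 output
model = nn.Sequential(
    nn.Linear(2, 16),
    nn.ReLU(),
    nn.Linear(16, 8),
    nn.ReLU(),
    nn.Linear(8, 4),
    nn.ReLU(),
    nn.Linear(4, 1)
)
print(model)
```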
MCQ Quiz
1. Neural networks consist of?
A. Layers
B. Tables
C. Loops
D. Files
2. ReLU is a?
A. Loss
B. Activation function
C. Optimizer
D. Dataset
3. Hidden layers increase?
A. Model capacity
B. Data size
C. Speed
D. Storage
4. nn.Sequential is used to?
A. Stack layers
B. Train model
C. Load data
D. Save weights
5. Output layer size depends on?
A. Problem type
B. Dataset size
C. Epochs
D. Optimizer
07 Training Loop
// Overview
This module explains the standard training loop used in PyTorch, including forward pass, loss computation, backward pass, and parameter updates.
// Learning Outcomes
▶Implement a complete training loop
▶Understand model parameter updates
▶Monitor training progress
// Code
python
import torch, torch.nn as nn
x = torch.randn(100, 2)
y = x.sum(dim=1, keepdim=True)
model = nn.Linear(2, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
for _ in range(100):
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
Exercises
Print loss every 10 epochs
Change optimizer to SGD
Assignment
Implement early stopping logic.
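A minimal sketch of the assignment, reusing the module's training setup; the patience value and improvement threshold are arbitrary choices:

```python
import torch, torch.nn as nn

torch.manual_seed(0)

x = torch.randn(100, 2)
y = x.sum(dim=1, keepdim=True)
model = nn.Linear(2, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

best_loss = float("inf")
patience, bad_epochs = 10, 0   # stop after 10 epochs without improvement

for epoch in range(1000):
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

    if loss.item() < best_loss - 1e-6:   # meaningful improvement
        best_loss = loss.item()
        bad_epochs = 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print(f"early stop at epoch {epoch}")
            break
```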
MCQ Quiz
1. First step in training loop?
A. Forward pass
B. Backward pass
C. Save model
D. Eval
2. backward() computes?
A. Gradients
B. Loss
C. Output
D. Accuracy
3. optimizer.step() does?
A. Update weights
B. Reset loss
C. Load data
D. Print output
4. optimizer.zero_grad() is used to?
A. Clear gradients
B. Save memory
C. Increase speed
D. Stop training
5. Epoch means?
A. One full dataset pass
B. One batch
C. One layer
D. One sample
08 Evaluation
// Overview
This module focuses on evaluating trained models by switching to evaluation mode and disabling gradient computation.
// Learning Outcomes
▶Use model.eval() correctly
▶Disable gradients during evaluation
▶Run inference safely
// Code
python
import torch
model.eval()                 # model: the trained network from the previous modules
with torch.no_grad():        # disable gradient tracking during inference
    print(model(torch.tensor([[3.0, 5.0]])))
Exercises
Test model with multiple inputs
Compare outputs in train vs eval mode
Assignment
Write evaluation code for accuracy calculation.
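A sketch of the assignment, assuming a binary classifier that outputs one logit per sample; the model and labels here are synthetic stand-ins, not a trained network:

```python
import torch

torch.manual_seed(0)

# Hypothetical binary classifier: one logit per sample
model = torch.nn.Linear(2, 1)

x = torch.randn(8, 2)
labels = (x.sum(dim=1) > 0).float()   # synthetic ground-truth labels

model.eval()
with torch.no_grad():
    logits = model(x).squeeze(1)
    preds = (logits > 0).float()      # threshold logits at 0
    accuracy = (preds == labels).float().mean()
print(accuracy.item())
```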
MCQ Quiz
1. model.eval() is used for?
A. Evaluation
B. Training
C. Saving
D. Loading
2. torch.no_grad() does?
A. Disables gradients
B. Speeds GPU
C. Clears memory
D. Stops training
3. Evaluation mode affects?
A. Dropout & BatchNorm
B. Loss only
C. Optimizer
D. Dataset
4. Evaluation should be done?
A. After training
B. Before training
C. During loading
D. During saving
5. Are gradients needed during evaluation?
A. No
B. Yes
C. Sometimes
D. Always
09 Save & Load
// Overview
This module teaches how to save trained models to disk and load them later without retraining.
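The code section for this module is missing from this copy; a minimal sketch of the standard state_dict workflow, using an nn.Linear layer like the earlier modules (the filename model.pth is an arbitrary choice):

```python
import torch, torch.nn as nn

model = nn.Linear(2, 1)
torch.save(model.state_dict(), "model.pth")       # save only the parameters

restored = nn.Linear(2, 1)                        # recreate the same architecture
restored.load_state_dict(torch.load("model.pth"))
restored.eval()                                   # switch to evaluation mode before inference
```

Saving the state_dict rather than the whole model object is the commonly recommended approach, since it decouples the saved weights from the class definition.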