
Last Updated: November 21, 2025

PyTorch

Open-source deep learning framework developed by Meta

Core Concepts

Item           Description
Tensor         Multi-dimensional array
Autograd       Automatic differentiation
nn.Module      Base class for models
Optimizer      Updates model parameters
DataLoader     Batched data loading
GPU Support    CUDA acceleration
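
A minimal sketch of the first two concepts, tensors and autograd; the values below are arbitrary illustrations.

import torch

# Create a tensor and ask autograd to track operations on it
x = torch.tensor([2.0, 3.0], requires_grad=True)

# Autograd records this computation as a graph
y = (x ** 2).sum()      # y = x1^2 + x2^2

# Backpropagation fills x.grad with dy/dx = 2 * x
y.backward()
print(x.grad)           # tensor([4., 6.])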

Basic Example

import torch
import torch.nn as nn

# Define model
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)
    
    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

# Training step (dummy batch for illustration)
inputs = torch.randn(32, 784)            # batch of 32 flattened 28x28 images
labels = torch.randint(0, 10, (32,))     # random target classes

optimizer.zero_grad()                    # clear gradients from the previous step
outputs = model(inputs)                  # forward pass
loss = criterion(outputs, labels)        # compute loss
loss.backward()                          # backpropagate
optimizer.step()                         # update parameters

Common Operations

Item              Description
torch.tensor()    Create a tensor from data
tensor.cuda()     Move a tensor to the GPU (or use .to(device))
model.train()     Set training mode (enables dropout, batch norm updates)
model.eval()      Set evaluation mode for inference
torch.save()      Save a model or state_dict to disk
torch.load()      Load a saved model or state_dict
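
A short sketch tying these operations together. It assumes the Net class from the Basic Example above; the device check and the file name "net.pt" are illustrative choices.

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

t = torch.tensor([[1.0, 2.0], [3.0, 4.0]])     # create a tensor
t = t.to(device)                               # same effect as .cuda() when a GPU is available

model = Net().to(device)                       # Net as defined in the Basic Example

# Saving the state_dict (rather than the whole model) is the usual pattern
torch.save(model.state_dict(), "net.pt")       # "net.pt" is an arbitrary file name
model.load_state_dict(torch.load("net.pt", map_location=device))

model.train()   # enable training behavior
model.eval()    # switch to inference behavior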

Best Practices

  • Use DataLoader for efficient batching and shuffling
  • Keep the model and each data batch on the same device (CPU or GPU)
  • Wrap inference in torch.no_grad() to skip gradient tracking
  • Clear gradients with optimizer.zero_grad() before each backward pass; all four practices appear in the sketch below
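
A minimal loop applying all four practices, assuming the Net, criterion, and optimizer pattern from the Basic Example; the random TensorDataset stands in for real data.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Random stand-in data: 256 samples with 784 features, 10 classes
dataset = TensorDataset(torch.randn(256, 784), torch.randint(0, 10, (256,)))
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = Net().to(device)                       # Net as defined in the Basic Example
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

model.train()
for inputs, labels in loader:
    inputs, labels = inputs.to(device), labels.to(device)   # same device as the model
    optimizer.zero_grad()                      # clear old gradients
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()

model.eval()
with torch.no_grad():                          # no gradient tracking during inference
    predictions = model(inputs).argmax(dim=1)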

💡 Pro Tips

PyTorch builds its computation graphs dynamically (define-by-run), so forward() can use ordinary Python control flow such as if statements and loops.
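
Because the graph is rebuilt on every call, data-dependent control flow needs no special tracing. A small illustrative sketch (the layer size and loop bound are arbitrary):

import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)

    def forward(self, x):
        # The number of layer applications can change on every forward pass;
        # autograd still records the correct graph each time.
        for _ in range(torch.randint(1, 4, (1,)).item()):
            x = torch.relu(self.fc(x))
        return x

out = DynamicNet()(torch.randn(2, 8))
print(out.shape)    # torch.Size([2, 8])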
