
Dropout

Train/eval mode, inverted scaling

Easy · Fundamentals

Problem Description

Implement Dropout regularization from scratch.

Signature

```python
class MyDropout(nn.Module):
    def __init__(self, p: float = 0.5): ...
    def forward(self, x: Tensor) -> Tensor: ...
```

Rules

• During training: zero each element independently with probability p, and scale the surviving elements by 1/(1-p)

• During eval: return input unchanged (identity)

• Do NOT use nn.Dropout or F.dropout
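The first two rules together are known as "inverted dropout": scaling the survivors by 1/(1-p) at train time keeps the expected activation unchanged, so eval mode needs no correction at all. A minimal sketch of that expectation property (standalone, not part of the required class):

```python
import torch

torch.manual_seed(0)
p = 0.5
x = torch.ones(100_000)

# Inverted dropout: zero each element with probability p,
# scale the survivors by 1/(1-p).
mask = (torch.rand_like(x) > p).float()
y = x * mask / (1 - p)

# E[y] = E[x]: each element is 0 with prob p, x/(1-p) with prob 1-p,
# so the mean over many elements stays close to the original.
print(x.mean().item(), y.mean().item())  # both close to 1.0
```

This is why the eval-time identity in the rules is correct: the scaling debt has already been paid during training.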

Template

Implement the function below. Use only basic PyTorch operations.

```python
# ✏️ YOUR IMPLEMENTATION HERE
class MyDropout(nn.Module):
    def __init__(self, p=0.5):
        super().__init__()
        pass

    def forward(self, x):
        pass
```

Test Your Implementation

Use this code to debug before submitting.

```python
# 🧪 Debug
d = MyDropout(p=0.5)
d.train()
x = torch.ones(10)
print('Train:', d(x))
d.eval()
print('Eval: ', d(x))
```

Reference Solution

Try solving it yourself before reading the solution below.

```python
# ✅ SOLUTION
class MyDropout(nn.Module):
    def __init__(self, p=0.5):
        super().__init__()
        if not 0 <= p <= 1:
            raise ValueError(f"p must be in [0, 1], got {p}")
        self.p = p

    def forward(self, x):
        if not self.training or self.p == 0:
            return x  # eval mode (or p=0) is the identity
        if self.p == 1:
            return torch.zeros_like(x)  # avoid division by zero below
        # Keep each element with probability 1-p, scale survivors by 1/(1-p)
        mask = (torch.rand_like(x) > self.p).float()
        return x * mask / (1 - self.p)
```
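Two properties are worth checking on any solution of this shape: eval mode must be an exact identity, and in train mode roughly a p-fraction of elements should be zeroed while every survivor is scaled by exactly 1/(1-p). A self-contained sanity check (the class is re-declared here so the snippet runs on its own):

```python
import torch
import torch.nn as nn

class MyDropout(nn.Module):
    def __init__(self, p=0.5):
        super().__init__()
        self.p = p

    def forward(self, x):
        if not self.training or self.p == 0:
            return x
        mask = (torch.rand_like(x) > self.p).float()
        return x * mask / (1 - self.p)

torch.manual_seed(0)
d = MyDropout(p=0.3)
x = torch.ones(10_000)

# Eval mode: exact identity, no scaling.
d.eval()
assert torch.equal(d(x), x)

# Train mode: ~30% zeroed, survivors scaled to 1/(1-0.3).
d.train()
y = d(x)
zero_frac = (y == 0).float().mean().item()
kept_val = y[y != 0][0].item()
print(f"zeroed fraction ≈ {zero_frac:.3f} (expect ≈ 0.3)")
print(f"kept value = {kept_val:.4f} (expect 1/(1-0.3) ≈ 1.4286)")
```

Note the zeroed fraction only approaches p statistically; graders that compare element counts exactly will fail a correct implementation, which is why checks like this use tolerances.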

Tips

Run Locally

For interactive practice with auto-grading, run TorchCode locally:

```shell
pip install torch-judge
```

then use `check("dropout")`.

Key Concepts

Train/eval mode, inverted scaling
