PyTorch and Neural Networks 2

YuJa recording of lecture

Topics mentioned at the board (not in this notebook):

  • Importance of using activation functions to break linearity (see the short sketch after this list).

  • Common choices of activation functions: sigmoid and relu.

  • Concept of one-hot encoding.
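
The first bullet deserves a quick illustration. Here is a minimal sketch (not from the lecture) of what "breaking linearity" means: two stacked Linear layers with no activation in between collapse to a single affine map, so the extra layer adds nothing.

import torch
from torch import nn

torch.manual_seed(0)
f = nn.Sequential(nn.Linear(4, 3), nn.Linear(3, 2))   # two layers, no activation
x = torch.randn(5, 4)

# The composition of the two Linear layers equals one affine map W x + b:
W = f[1].weight @ f[0].weight
b = f[1].weight @ f[0].bias + f[1].bias
print(torch.allclose(f(x), x @ W.T + b, atol=1e-6))   # True: f is still just affine

Putting nn.Sigmoid() or nn.ReLU() between the two Linear layers breaks this identity, which is exactly why the activation function is there.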

# Patch tqdm so that anything using the notebook (widget) progress bars falls
# back to the plain console version; this avoids needing ipywidgets installed.
from tqdm.std import tqdm, trange
from tqdm import notebook
notebook.tqdm = tqdm
notebook.trange = trange

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

import torch
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
# Load the data
training_data = datasets.MNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)

test_data = datasets.MNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)
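
A quick check of what we just loaded (not part of the lecture code): each image is a 28×28 grid of pixels, which is where the 784 = 28·28 inputs below come from.

print(training_data.data.shape)   # torch.Size([60000, 28, 28])
print(test_data.data.shape)       # torch.Size([10000, 28, 28])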

Second YouTube video on Neural Networks from 3Blue1Brown. This video is on gradient descent. Recommended clips:

  • 0:25-1:24

  • 3:18-4:05

  • 5:15-7:50

This is what we finished with on Monday:

class ThreeBlue(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.layers = nn.Sequential(
            nn.Linear(784,10)
        )

    def forward(self,x):
        y = self.flatten(x)
        z = self.layers(y)
        return z

We instantiate an object of this class as follows.

wed = ThreeBlue()

In class (see the YuJa recording above), we gradually built up to the following code. It was designed to match the 3Blue1Brown video’s neural network.

class ThreeBlue(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.layers = nn.Sequential(
            nn.Linear(784,16),
            nn.Sigmoid(),
            nn.Linear(16,16),
            nn.Sigmoid(),
            nn.Linear(16,10),
            nn.Sigmoid()
        )

    def forward(self,x):
        x = x/255
        y = self.flatten(x)
        z = self.layers(y)
        return z
wed = ThreeBlue()

Here are the weights and biases for this neural network. When we talk about fitting or training a neural network, we mean adjusting the weights and biases to try to minimize some loss function.

for p in wed.parameters():
    print(p.shape)
torch.Size([16, 784])
torch.Size([16])
torch.Size([16, 16])
torch.Size([16])
torch.Size([10, 16])
torch.Size([10])
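
An optional aside (not shown in lecture): named_parameters() pairs each of these tensors with the name of the layer it belongs to, which makes the shapes easier to interpret.

for name, p in wed.named_parameters():
    print(name, tuple(p.shape))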
for p in wed.parameters():
    print(p.numel())
12544
16
256
16
160
10

Notice that these add up to 13,002, the same number of parameters that appeared in the 3Blue1Brown videos.
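
Counting by hand, layer by layer, gives the same total (a quick check, not from the lecture):

print((784*16 + 16) + (16*16 + 16) + (16*10 + 10))   # 13002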

sum([p.numel() for p in wed.parameters()])
13002

You can even do the same thing without the square brackets. This is secretly using a generator expression instead of a list comprehension.

sum(p.numel() for p in wed.parameters())
13002
wed
ThreeBlue(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (layers): Sequential(
    (0): Linear(in_features=784, out_features=16, bias=True)
    (1): Sigmoid()
    (2): Linear(in_features=16, out_features=16, bias=True)
    (3): Sigmoid()
    (4): Linear(in_features=16, out_features=10, bias=True)
    (5): Sigmoid()
  )
)

In the line that begins self.layers = above, we were specifying that each ThreeBlue object should have a layers attribute. Here is that attribute for the case of wed.

wed.layers
Sequential(
  (0): Linear(in_features=784, out_features=16, bias=True)
  (1): Sigmoid()
  (2): Linear(in_features=16, out_features=16, bias=True)
  (3): Sigmoid()
  (4): Linear(in_features=16, out_features=10, bias=True)
  (5): Sigmoid()
)

You can access individual layers using subscripting; for example, wed.layers[2] is the layer at index 2 (the second Linear layer).

wed.layers[2]
Linear(in_features=16, out_features=16, bias=True)
wed.layers[2].weight.shape
torch.Size([16, 16])
wed.layers[2].bias.shape
torch.Size([16])

On Monday, we had to divide by 255 each time we input data to our neural network. Today, we’ve put that step directly into the forward method of the neural network; it’s the line x = x/255.
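
As a quick check of why that division is needed (not from the lecture): the raw MNIST pixels are stored as 8-bit integers ranging from 0 to 255, and dividing by 255 rescales them to the interval [0, 1].

print(training_data.data.dtype)                                           # torch.uint8
print(training_data.data.min().item(), training_data.data.max().item())   # 0 255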

wed(training_data.data)[:3]
tensor([[0.5128, 0.3936, 0.5649, 0.5723, 0.5577, 0.5520, 0.5960, 0.5789, 0.4498,
         0.5246],
        [0.5123, 0.3924, 0.5644, 0.5736, 0.5568, 0.5522, 0.5969, 0.5799, 0.4494,
         0.5249],
        [0.5124, 0.3922, 0.5646, 0.5733, 0.5572, 0.5532, 0.5970, 0.5803, 0.4490,
         0.5274]], grad_fn=<SliceBackward0>)
y_pred = wed(training_data.data)
training_data.targets[:3]
tensor([5, 0, 4])

To match the 3Blue1Brown video, we are going to convert the targets, which are integers like 5, into length 10 vectors like [0,0,0,0,0,1,0,0,0,0]. This procedure is called one-hot encoding, and it also exists in scikit-learn.

from torch.nn.functional import one_hot
one_hot(training_data.targets[:3], num_classes=10).to(torch.float)
tensor([[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
        [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]])
y_true = one_hot(training_data.targets, num_classes=10).to(torch.float)
y_true.shape
torch.Size([60000, 10])
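
As a minimal sketch of what one_hot is doing (not from the lecture): start from a tensor of zeros and put a 1 in the column given by each target.

targets = training_data.targets[:3]                 # tensor([5, 0, 4])
manual = torch.zeros(len(targets), 10)
manual[torch.arange(len(targets)), targets] = 1.0   # advanced indexing: one 1 per row
print(manual)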

Using mean squared error (MSE) on the output probabilities is not considered the best approach for a classification problem like this, but it is easy to understand, and we will follow it for now to match the 3Blue1Brown video.

loss_fn = nn.MSELoss()
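
As an aside (not used in this lecture), the more standard choice for a classification problem is cross-entropy loss. It works directly with the integer targets (no one-hot encoding needed) and expects the raw, pre-sigmoid scores. A hedged sketch of the usage pattern, with stand-in random scores:

ce_loss_fn = nn.CrossEntropyLoss()
raw_scores = torch.randn(3, 10)   # stand-in for raw (pre-sigmoid) outputs of shape (n, 10)
print(ce_loss_fn(raw_scores, training_data.targets[:3]))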

Here is the performance of the randomly initialized model. The output of this sort of loss function is not so easy to analyze in isolation. The important thing is that if we can lower this number, then the model is performing better (on the training data).

loss_fn(y_pred, y_true)
tensor(0.2792, grad_fn=<MseLossBackward0>)
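
A more interpretable number (an aside, not from the lecture) is the accuracy: the fraction of images whose largest output matches the true digit. For a randomly initialized model we would expect something near 1/10.

accuracy = (y_pred.argmax(dim=1) == training_data.targets).float().mean()
print(accuracy)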

Here we try to find better weights and biases using gradient descent. Try to get comfortable with these steps (they can take some time to internalize).

optimizer = torch.optim.SGD(wed.parameters(), lr=0.1)

There aren’t yet any gradients associated with the parameters of the model (the weights and biases).

for p in wed.parameters():
    print(p.grad)
None
None
None
None
None
None
loss = loss_fn(y_pred, y_true)

Still no gradients.

for p in wed.parameters():
    print(p.grad)
None
None
None
None
None
None
loss.backward()

The line loss.backward() told PyTorch to compute the gradient of the loss with respect to each of the 13,002 weights and biases.

for p in wed.parameters():
    print(p.grad)
tensor([[0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.],
        ...,
        [0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.]])
tensor([-1.8313e-04,  6.0323e-05, -7.9682e-05,  1.3652e-04,  2.8588e-04,
         5.4873e-05,  5.6339e-04, -4.3447e-04,  8.5402e-05,  4.7174e-05,
         2.8684e-04,  2.4752e-04,  3.4093e-04,  1.2426e-04,  7.8342e-05,
         3.8503e-04])
tensor([[ 1.5513e-03,  1.4997e-03,  1.4607e-03,  1.5071e-03,  1.4121e-03,
          1.5751e-03,  1.5298e-03,  1.5007e-03,  1.4708e-03,  1.5008e-03,
          1.6073e-03,  1.6327e-03,  1.5269e-03,  1.4076e-03,  1.5441e-03,
          1.3184e-03],
        [ 2.4013e-03,  2.3361e-03,  2.2776e-03,  2.3523e-03,  2.1835e-03,
          2.4395e-03,  2.3866e-03,  2.3222e-03,  2.2672e-03,  2.3256e-03,
          2.4822e-03,  2.5124e-03,  2.3493e-03,  2.1990e-03,  2.4188e-03,
          2.0566e-03],
        [-2.2165e-06, -9.6794e-06, -2.1568e-05, -1.9828e-05, -1.8939e-05,
         -9.7146e-06, -1.5075e-05, -4.0656e-05, -1.7720e-05,  3.4108e-09,
         -3.3992e-05, -3.1905e-05, -2.0933e-05, -1.9765e-05, -2.4898e-05,
         -2.6694e-05],
        [-2.8100e-04, -2.8200e-04, -2.7381e-04, -2.9060e-04, -2.6077e-04,
         -2.7536e-04, -2.9915e-04, -2.7826e-04, -2.6433e-04, -2.5710e-04,
         -2.9216e-04, -2.8800e-04, -2.8381e-04, -2.6977e-04, -3.0117e-04,
         -2.5156e-04],
        [ 1.0721e-03,  1.0468e-03,  1.0277e-03,  1.0680e-03,  9.8862e-04,
          1.0792e-03,  1.0801e-03,  1.0512e-03,  1.0151e-03,  1.0319e-03,
          1.1056e-03,  1.1206e-03,  1.0652e-03,  1.0057e-03,  1.1023e-03,
          9.3069e-04],
        [ 5.5203e-04,  5.5129e-04,  5.4008e-04,  5.5955e-04,  5.1289e-04,
          5.7207e-04,  5.5648e-04,  5.5705e-04,  5.2402e-04,  5.2216e-04,
          5.8920e-04,  5.9263e-04,  5.4806e-04,  5.1084e-04,  5.7347e-04,
          4.7725e-04],
        [-1.0829e-03, -1.0391e-03, -1.0199e-03, -1.0541e-03, -9.7853e-04,
         -1.0998e-03, -1.0704e-03, -1.0504e-03, -1.0328e-03, -1.0516e-03,
         -1.1339e-03, -1.1448e-03, -1.0772e-03, -9.9053e-04, -1.0844e-03,
         -9.3149e-04],
        [ 4.0089e-04,  3.7560e-04,  3.5750e-04,  3.7532e-04,  3.4274e-04,
          3.7918e-04,  3.9848e-04,  3.5818e-04,  3.5726e-04,  3.8072e-04,
          3.8984e-04,  3.9334e-04,  3.8159e-04,  3.6648e-04,  4.0230e-04,
          3.4276e-04],
        [ 1.0015e-04,  1.1278e-04,  1.0807e-04,  1.1097e-04,  1.0148e-04,
          9.9824e-05,  1.1656e-04,  1.0273e-04,  9.4862e-05,  9.2009e-05,
          9.2207e-05,  8.7203e-05,  9.4461e-05,  1.0173e-04,  1.1096e-04,
          9.0506e-05],
        [-2.7376e-04, -2.6440e-04, -2.5965e-04, -2.7287e-04, -2.4827e-04,
         -2.6744e-04, -2.7625e-04, -2.6334e-04, -2.5119e-04, -2.5872e-04,
         -2.7264e-04, -2.7600e-04, -2.7239e-04, -2.6221e-04, -2.8732e-04,
         -2.3753e-04],
        [-3.0894e-05, -4.6069e-05, -5.6068e-05, -6.0372e-05, -4.7414e-05,
         -3.2222e-05, -5.3234e-05, -5.4248e-05, -3.5233e-05, -2.8325e-05,
         -4.0417e-05, -3.6876e-05, -3.9883e-05, -5.7270e-05, -6.4234e-05,
         -5.3041e-05],
        [-2.0521e-03, -1.9898e-03, -1.9339e-03, -2.0009e-03, -1.8605e-03,
         -2.0647e-03, -2.0414e-03, -1.9682e-03, -1.9259e-03, -1.9836e-03,
         -2.1029e-03, -2.1329e-03, -2.0004e-03, -1.8793e-03, -2.0647e-03,
         -1.7582e-03],
        [-1.1316e-03, -1.1034e-03, -1.0669e-03, -1.1025e-03, -1.0163e-03,
         -1.1266e-03, -1.1373e-03, -1.0711e-03, -1.0502e-03, -1.0853e-03,
         -1.1463e-03, -1.1481e-03, -1.0907e-03, -1.0356e-03, -1.1419e-03,
         -9.8145e-04],
        [ 6.4383e-04,  6.2033e-04,  5.9793e-04,  6.2632e-04,  5.6714e-04,
          6.3456e-04,  6.4794e-04,  6.0152e-04,  5.9248e-04,  6.1108e-04,
          6.5407e-04,  6.5323e-04,  6.2586e-04,  5.9097e-04,  6.5470e-04,
          5.5457e-04],
        [ 6.4822e-04,  6.2109e-04,  6.1811e-04,  6.3792e-04,  5.9188e-04,
          6.5852e-04,  6.3653e-04,  6.2957e-04,  6.1760e-04,  6.3564e-04,
          6.8030e-04,  6.9065e-04,  6.4661e-04,  5.9874e-04,  6.5402e-04,
          5.7052e-04],
        [ 1.0332e-03,  1.0129e-03,  9.8541e-04,  1.0194e-03,  9.3670e-04,
          1.0267e-03,  1.0482e-03,  9.8933e-04,  9.5952e-04,  9.9341e-04,
          1.0564e-03,  1.0607e-03,  9.9555e-04,  9.5971e-04,  1.0609e-03,
          9.1640e-04]])
tensor([ 2.9285e-03,  4.5277e-03, -4.0536e-05, -5.2834e-04,  2.0315e-03,
         1.0567e-03, -2.0450e-03,  7.2072e-04,  1.9550e-04, -5.0986e-04,
        -8.0033e-05, -3.8523e-03, -2.1089e-03,  1.1834e-03,  1.2280e-03,
         1.9357e-03])
tensor([[0.0107, 0.0118, 0.0100, 0.0125, 0.0115, 0.0129, 0.0104, 0.0088, 0.0129,
         0.0120, 0.0106, 0.0071, 0.0137, 0.0115, 0.0074, 0.0090],
        [0.0069, 0.0076, 0.0065, 0.0081, 0.0075, 0.0083, 0.0067, 0.0057, 0.0084,
         0.0078, 0.0068, 0.0045, 0.0089, 0.0075, 0.0048, 0.0058],
        [0.0119, 0.0130, 0.0111, 0.0138, 0.0128, 0.0143, 0.0115, 0.0098, 0.0143,
         0.0133, 0.0117, 0.0078, 0.0152, 0.0128, 0.0082, 0.0100],
        [0.0119, 0.0131, 0.0112, 0.0139, 0.0129, 0.0143, 0.0115, 0.0098, 0.0144,
         0.0134, 0.0118, 0.0079, 0.0153, 0.0128, 0.0083, 0.0101],
        [0.0118, 0.0129, 0.0110, 0.0137, 0.0127, 0.0141, 0.0114, 0.0097, 0.0142,
         0.0132, 0.0116, 0.0078, 0.0151, 0.0126, 0.0081, 0.0099],
        [0.0118, 0.0130, 0.0111, 0.0138, 0.0128, 0.0142, 0.0115, 0.0097, 0.0143,
         0.0133, 0.0117, 0.0078, 0.0152, 0.0127, 0.0082, 0.0100],
        [0.0124, 0.0136, 0.0116, 0.0145, 0.0134, 0.0149, 0.0120, 0.0102, 0.0150,
         0.0139, 0.0122, 0.0082, 0.0159, 0.0133, 0.0086, 0.0105],
        [0.0120, 0.0132, 0.0112, 0.0140, 0.0130, 0.0144, 0.0116, 0.0099, 0.0145,
         0.0135, 0.0118, 0.0079, 0.0154, 0.0129, 0.0083, 0.0101],
        [0.0090, 0.0099, 0.0084, 0.0105, 0.0097, 0.0109, 0.0087, 0.0074, 0.0109,
         0.0101, 0.0089, 0.0059, 0.0116, 0.0097, 0.0062, 0.0076],
        [0.0110, 0.0121, 0.0103, 0.0129, 0.0119, 0.0133, 0.0107, 0.0091, 0.0133,
         0.0124, 0.0109, 0.0073, 0.0141, 0.0118, 0.0076, 0.0093]])
tensor([0.0207, 0.0134, 0.0229, 0.0230, 0.0227, 0.0229, 0.0240, 0.0232, 0.0174,
        0.0213])
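
An optional check (not from the lecture): every parameter now has a gradient of exactly the same shape as the parameter itself, so there are again 13,002 gradient entries in total.

print(all(p.grad.shape == p.shape for p in wed.parameters()))   # True
print(sum(p.grad.numel() for p in wed.parameters()))            # 13002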

The next line adjusts the weights and biases by adding a multiple of the negative gradient. (We are trying to minimize the loss; the gradient points in the direction of fastest ascent, so the negative gradient points in the direction of fastest descent.) The multiple we use is determined by the learning rate lr that we specified when we created the optimizer above.
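
Conceptually, for plain SGD with no momentum, the update performed by optimizer.step() amounts to the following sketch (this is the idea, not the actual torch.optim.SGD implementation):

def manual_sgd_step(model, lr):
    # Subtract lr times the stored gradient from every parameter, in place,
    # without tracking this operation in the computational graph.
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad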

optimizer.step()
wed(training_data.data)[:3]
tensor([[0.5100, 0.3918, 0.5617, 0.5692, 0.5546, 0.5489, 0.5928, 0.5757, 0.4474,
         0.5216],
        [0.5095, 0.3906, 0.5612, 0.5705, 0.5537, 0.5491, 0.5936, 0.5768, 0.4471,
         0.5219],
        [0.5096, 0.3905, 0.5614, 0.5701, 0.5540, 0.5501, 0.5938, 0.5771, 0.4466,
         0.5244]], grad_fn=<SliceBackward0>)

We now want to repeat that procedure. Here we will repeat it 10 times, but often we will want to repeat it many more times. What we hope is that the loss value is decreasing.

epochs = 10

for i in range(epochs):
    y_true = one_hot(training_data.targets, num_classes=10).to(torch.float)
    y_pred = wed(training_data.data)
    loss = loss_fn(y_true,y_pred)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(loss)
tensor(0.2767, grad_fn=<MseLossBackward0>)
tensor(0.2741, grad_fn=<MseLossBackward0>)
tensor(0.2717, grad_fn=<MseLossBackward0>)
tensor(0.2692, grad_fn=<MseLossBackward0>)
tensor(0.2668, grad_fn=<MseLossBackward0>)
tensor(0.2644, grad_fn=<MseLossBackward0>)
tensor(0.2620, grad_fn=<MseLossBackward0>)
tensor(0.2597, grad_fn=<MseLossBackward0>)
tensor(0.2574, grad_fn=<MseLossBackward0>)
tensor(0.2551, grad_fn=<MseLossBackward0>)

An important thing to point out is that if we run the same code again, we won’t be starting back at the beginning. Each time we run this training procedure, it will begin where the last training procedure left off.

epochs = 100

for i in range(epochs):
    y_true = one_hot(training_data.targets, num_classes=10).to(torch.float)
    y_pred = wed(training_data.data)
    loss = loss_fn(y_true,y_pred)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if i%2 == 0:
        print(loss)
tensor(0.2528, grad_fn=<MseLossBackward0>)
tensor(0.2484, grad_fn=<MseLossBackward0>)
tensor(0.2441, grad_fn=<MseLossBackward0>)
tensor(0.2399, grad_fn=<MseLossBackward0>)
tensor(0.2358, grad_fn=<MseLossBackward0>)
tensor(0.2318, grad_fn=<MseLossBackward0>)
tensor(0.2280, grad_fn=<MseLossBackward0>)
tensor(0.2242, grad_fn=<MseLossBackward0>)
tensor(0.2206, grad_fn=<MseLossBackward0>)
tensor(0.2170, grad_fn=<MseLossBackward0>)
tensor(0.2136, grad_fn=<MseLossBackward0>)
tensor(0.2102, grad_fn=<MseLossBackward0>)
tensor(0.2069, grad_fn=<MseLossBackward0>)
tensor(0.2038, grad_fn=<MseLossBackward0>)
tensor(0.2007, grad_fn=<MseLossBackward0>)
tensor(0.1977, grad_fn=<MseLossBackward0>)
tensor(0.1948, grad_fn=<MseLossBackward0>)
tensor(0.1920, grad_fn=<MseLossBackward0>)
tensor(0.1893, grad_fn=<MseLossBackward0>)
tensor(0.1866, grad_fn=<MseLossBackward0>)
tensor(0.1840, grad_fn=<MseLossBackward0>)
tensor(0.1815, grad_fn=<MseLossBackward0>)
tensor(0.1791, grad_fn=<MseLossBackward0>)
tensor(0.1768, grad_fn=<MseLossBackward0>)
tensor(0.1745, grad_fn=<MseLossBackward0>)
tensor(0.1723, grad_fn=<MseLossBackward0>)
tensor(0.1701, grad_fn=<MseLossBackward0>)
tensor(0.1681, grad_fn=<MseLossBackward0>)
tensor(0.1661, grad_fn=<MseLossBackward0>)
tensor(0.1641, grad_fn=<MseLossBackward0>)
tensor(0.1622, grad_fn=<MseLossBackward0>)
tensor(0.1604, grad_fn=<MseLossBackward0>)
tensor(0.1586, grad_fn=<MseLossBackward0>)
tensor(0.1568, grad_fn=<MseLossBackward0>)
tensor(0.1552, grad_fn=<MseLossBackward0>)
tensor(0.1535, grad_fn=<MseLossBackward0>)
tensor(0.1520, grad_fn=<MseLossBackward0>)
tensor(0.1504, grad_fn=<MseLossBackward0>)
tensor(0.1489, grad_fn=<MseLossBackward0>)
tensor(0.1475, grad_fn=<MseLossBackward0>)
tensor(0.1461, grad_fn=<MseLossBackward0>)
tensor(0.1447, grad_fn=<MseLossBackward0>)
tensor(0.1434, grad_fn=<MseLossBackward0>)
tensor(0.1421, grad_fn=<MseLossBackward0>)
tensor(0.1409, grad_fn=<MseLossBackward0>)
tensor(0.1397, grad_fn=<MseLossBackward0>)
tensor(0.1385, grad_fn=<MseLossBackward0>)
tensor(0.1374, grad_fn=<MseLossBackward0>)
tensor(0.1363, grad_fn=<MseLossBackward0>)
tensor(0.1352, grad_fn=<MseLossBackward0>)

Notice how the loss is steadily decreasing; that’s the best result we can hope for. If we choose a learning rate that is much too big, the behavior is very different. Below we re-initialize the model and set lr=500, which is far too large.

wed = ThreeBlue()
optimizer = torch.optim.SGD(wed.parameters(), lr=500)
epochs = 10

for i in range(epochs):
    y_true = one_hot(training_data.targets, num_classes=10).to(torch.float)
    y_pred = wed(training_data.data)
    loss = loss_fn(y_true,y_pred)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(loss)
tensor(0.2877, grad_fn=<MseLossBackward0>)
tensor(0.1000, grad_fn=<MseLossBackward0>)
tensor(0.1000, grad_fn=<MseLossBackward0>)
tensor(0.1000, grad_fn=<MseLossBackward0>)
tensor(0.1000, grad_fn=<MseLossBackward0>)
tensor(0.1000, grad_fn=<MseLossBackward0>)
tensor(0.1000, grad_fn=<MseLossBackward0>)
tensor(0.1000, grad_fn=<MseLossBackward0>)
tensor(0.1000, grad_fn=<MseLossBackward0>)
tensor(0.1000, grad_fn=<MseLossBackward0>)

Here the loss improves for one iteration of gradient descent and then gets stuck at 0.1. With one-hot targets, a mean squared error of exactly 1/10 is what we would see if every output were 0, which suggests the enormous step drove the sigmoids into saturation, where the gradients are essentially zero and no further progress is made.
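
A quick way to confirm that interpretation of the 0.1 value (not from the lecture): predicting all zeros against the one-hot targets gives an MSE of exactly 1/10.

print(loss_fn(torch.zeros_like(y_true), y_true))   # tensor(0.1000)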