PyTorch basic pattern

A minimal snippet to get started with PyTorch.

Simple examples and the basics are how one grasps a framework, whether it is a mathematical toolkit or a software stack. The snippet below is the essence of a PyTorch model, and I always start from this minimal setup when assembling more complex things.

import torch
import matplotlib.pyplot as plt

# Training data as plain tensors; torch.autograd.Variable is deprecated
# since PyTorch 0.4, tensors participate in autograd directly
x = torch.tensor([[0.], [1.], [2.], [3.], [4.]])
y = torch.tensor([[8.], [10.], [9.], [21.], [12.]])

# A small fully connected network, 1 -> 16 -> 10 -> 1,
# with a Tanh nonlinearity after the first layer
net = torch.nn.Sequential(
          torch.nn.Linear(1, 16),
          torch.nn.Tanh(),
          torch.nn.Linear(16, 10),
          torch.nn.Linear(10, 1)
        )

optimizer = torch.optim.Adam(net.parameters(), lr=0.1)
loss_func = torch.nn.MSELoss()
loss_sequence = []   # track the loss at each step
for t in range(1000):
    prediction = net(x)     # input x and predict based on x

    loss = loss_func(prediction, y)     # must be (1. nn output, 2. target)
    loss_sequence.append(loss.item())   # .item() instead of deprecated .data
    optimizer.zero_grad()   # clear gradients for next train
    loss.backward()         # backpropagation, compute gradients
    optimizer.step()        # apply gradients

plt.plot(loss_sequence)
plt.show()
net(x)   # after training, the network reproduces the targets
tensor([[ 8.0000],
        [10.0000],
        [ 9.0000],
        [21.0001],
        [12.0001]], grad_fn=<AddmmBackward0>)