One grasps a framework through simple examples and the basics; this holds for scientific (mathematical) frameworks and software stacks alike. The snippet below is the essence of a PyTorch model, and I always start from this simple setup to assemble more complex things.
import torch
import matplotlib.pyplot as plt

x_input = torch.FloatTensor([[0], [1], [2], [3], [4]])
y_input = torch.FloatTensor([[8], [10], [9], [21], [12]])
x, y = torch.autograd.Variable(x_input), torch.autograd.Variable(y_input)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 16),
    torch.nn.Tanh(),
    torch.nn.Linear(16, 10),
    torch.nn.Linear(10, 1)
)

optimizer = torch.optim.Adam(net.parameters(), lr=0.1)
loss_func = torch.nn.MSELoss()

loss_sequence = list()
for t in range(1000):
    prediction = net(x)                       # input x and predict based on x
    loss = loss_func(prediction, y)           # must be (1. nn output, 2. target)
    loss_sequence.append(loss.data.numpy())   # record the loss for plotting
    optimizer.zero_grad()                     # clear gradients for next train
    loss.backward()                           # backpropagation, compute gradients
    optimizer.step()                          # apply gradients

plt.plot(loss_sequence)                       # training curve
net(x)                                        # predictions of the trained net
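The torch.autograd.Variable wrapper has been a no-op since PyTorch 0.4, so the same setup can be written in the modern idiom. Here is a minimal sketch, assuming PyTorch ≥ 1.0, that also plots the fitted curve against the training points; the denser evaluation grid is my own addition for illustration:

import torch
import matplotlib.pyplot as plt

# Plain tensors are enough as inputs; Variable wrappers are no longer needed.
x = torch.tensor([[0.], [1.], [2.], [3.], [4.]])
y = torch.tensor([[8.], [10.], [9.], [21.], [12.]])

net = torch.nn.Sequential(
    torch.nn.Linear(1, 16),
    torch.nn.Tanh(),
    torch.nn.Linear(16, 10),
    torch.nn.Linear(10, 1),
)

optimizer = torch.optim.Adam(net.parameters(), lr=0.1)
loss_func = torch.nn.MSELoss()

losses = []
for t in range(1000):
    prediction = net(x)
    loss = loss_func(prediction, y)
    losses.append(loss.item())        # .item() extracts the scalar loss value
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Evaluate the trained net on a denser grid; no gradients needed for inference.
with torch.no_grad():
    grid = torch.linspace(0, 4, 100).unsqueeze(1)
    fit = net(grid)

plt.scatter(x, y, label="data")
plt.plot(grid, fit, label="fit")
plt.legend()
plt.show()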