1. General
At its core, PyTorch provides two main features:
- An n-dimensional Tensor, similar to NumPy but able to run on GPUs
- Automatic differentiation for building and training neural networks
We will use the problem of fitting a third-order polynomial to y = sin(x) as our running example. The network has four parameters and is trained with gradient descent, fitting the data by minimizing the Euclidean distance between the network output and the true output.
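Written out, the model and loss are

$$
\hat{y} = a + bx + cx^2 + dx^3, \qquad L = \sum_i \left(\hat{y}_i - y_i\right)^2
$$

and differentiating $L$ gives the gradients that the manual backward passes in the next two sections compute:

$$
\frac{\partial L}{\partial a} = \sum_i 2(\hat{y}_i - y_i), \quad
\frac{\partial L}{\partial b} = \sum_i 2(\hat{y}_i - y_i)\,x_i, \quad
\frac{\partial L}{\partial c} = \sum_i 2(\hat{y}_i - y_i)\,x_i^2, \quad
\frac{\partial L}{\partial d} = \sum_i 2(\hat{y}_i - y_i)\,x_i^3
$$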
2. NumPy
NumPy is a general framework for scientific computing: it provides an n-dimensional array object and many functions for manipulating these arrays. NumPy knows nothing about computation graphs, deep learning, or gradients. However, we can easily use NumPy to fit a third-order polynomial to the sine function by manually implementing the forward and backward passes through the network using NumPy operations:
```python
# -*- coding: utf-8 -*-
import numpy as np
import math

# Create random input and output data
x = np.linspace(-math.pi, math.pi, 2000)
y = np.sin(x)

# Randomly initialize weights
a = np.random.randn()
b = np.random.randn()
c = np.random.randn()
d = np.random.randn()

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    # y = a + b x + c x^2 + d x^3
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

print(f'Result: y = {a} + {b} x + {c} x^2 + {d} x^3')
```
```
99 563.8291015751557
199 375.83806540034493
299 251.52341151625967
399 169.31633903652465
499 114.95407957727652
599 79.00503300369046
699 55.23231027939239
799 39.51159649640557
899 29.115566843371823
999 22.2406918598557
1099 17.694326952652982
1199 14.687792949964951
1299 12.699545397441945
1399 11.384691566299782
1499 10.51515596801245
1599 9.940112958440231
1699 9.559821495029546
1799 9.30832244946974
1899 9.14199655493815
1999 9.031997888657425
Result: y = 0.0008026128609173351 + 0.8425118975937722 x + -0.00013846407404389141 x^2 + -0.0913064454528145 x^3
```
3. Tensors
NumPy is a great framework, but it cannot use GPUs to accelerate its numerical computations. For modern deep neural networks, GPUs often provide speedups of 50x or greater, so unfortunately NumPy is not enough for modern deep learning.
Here we introduce the most fundamental PyTorch concept: the Tensor. A PyTorch Tensor is conceptually identical to a NumPy array: a Tensor is an n-dimensional array, and PyTorch provides many functions for operating on these Tensors. Behind the scenes, Tensors can keep track of a computational graph and gradients, but they are also useful as a generic tool for scientific computing.
Unlike NumPy, PyTorch Tensors can use GPUs to accelerate their numerical computations. To run a PyTorch Tensor on a GPU, you simply need to specify the correct device.
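For illustration (a minimal sketch, not part of the tutorial's running example), a Tensor can be created directly on a device, or moved between devices with Tensor.to:

```python
import torch

# Pick a GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, device=device)  # created directly on the chosen device
y = torch.ones(3).to(device)       # created on the CPU, then moved
print((x + y).device)              # the result lives on the same device
```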
Here we use PyTorch Tensors to fit a third-order polynomial to the sine function. Like the NumPy example above, we need to manually implement the forward and backward passes through the network:
```python
# -*- coding: utf-8 -*-
import torch
import math

dtype = torch.float
device = torch.device("cuda:0")  # Change to torch.device("cpu") to run on CPU

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights using gradient descent
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
```
```
99 1039.5152587890625
199 707.0489501953125
299 482.3299560546875
399 330.28277587890625
499 227.29885864257812
599 157.47219848632812
699 110.07612609863281
799 77.86994171142578
899 55.96104431152344
999 41.04032516479492
1099 30.867229461669922
1199 23.923267364501953
1299 19.177928924560547
1399 15.93132209777832
1499 13.707606315612793
1599 12.182741165161133
1699 11.135905265808105
1799 10.416431427001953
1899 9.92137622833252
1999 9.580371856689453
Result: y = 0.023760870099067688 + 0.8410876393318176 x + -0.004099148791283369 x^2 + -0.09110385179519653 x^3
```
4. Autograd
In the above examples, we had to manually implement both the forward and backward passes of our neural network. Manually implementing the backward pass is not a big deal for a small two-layer network, but it quickly becomes very hairy for large, complex networks.
Fortunately, we can use automatic differentiation to automate the computation of backward passes in neural networks. The autograd package in PyTorch provides exactly this functionality. When using autograd, the forward pass of your network defines a computational graph: nodes in the graph are Tensors, and edges are functions that produce output Tensors from input Tensors. Backpropagating through this graph then allows gradients to be computed easily.
This sounds complicated, but it is quite simple to use in practice. Each Tensor represents a node in a computational graph. If x is a Tensor with x.requires_grad=True, then x.grad is another Tensor holding the gradient of x with respect to some scalar value.
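As a small standalone illustration (not part of the running example), setting requires_grad=True and calling backward() on a scalar populates .grad:

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2     # building this expression records a node in the graph
y.backward()   # backpropagate from the scalar y
print(x.grad)  # dy/dx = 2x, so this prints tensor(6.)
```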
Here we use PyTorch Tensors and autograd to implement our example of fitting a third-order polynomial to the sine function; now we no longer need to manually implement the backward pass through the network:
```python
# -*- coding: utf-8 -*-
import torch
import math

dtype = torch.float
device = torch.device("cuda:0")  # Change to torch.device("cpu") to run on CPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a scalar Tensor; loss.item() gets the scalar value it holds.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
```
```
99 110.25205993652344
199 78.43357849121094
299 56.650428771972656
399 41.72026824951172
499 31.475496292114258
599 24.437694549560547
699 19.597450256347656
799 16.264957427978516
899 13.967991828918457
999 12.38306999206543
1099 11.288278579711914
1199 10.531269073486328
1299 10.007284164428711
1399 9.644231796264648
1499 9.39242935180664
1599 9.21764087677002
1699 9.096193313598633
1799 9.011730194091797
1899 8.952939987182617
1999 8.911981582641602
Result: y = -0.009276128374040127 + 0.8526148796081543 x + 0.0016002863412722945 x^2 + -0.09274350851774216 x^3
```
5. Defining new autograd functions
Under the hood, each primitive autograd operator is really two functions that operate on Tensors. The forward function computes output Tensors from input Tensors. The backward function receives the gradient of the output Tensors with respect to some scalar value, and computes the gradient of the input Tensors with respect to that same scalar value.
In PyTorch, we can easily define our own autograd operators by subclassing torch.autograd.Function and implementing the forward and backward functions. We can then use the new autograd operator by constructing an instance and calling it like a function, passing Tensors containing input data. In this example we define our model as y = a + b P3(c + d x) instead of y = a + b x + c x^2 + d x^3, where P3 is the Legendre polynomial of degree three.
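For reference, the gradient returned by the backward method below is just the chain rule applied to the derivative of the Legendre polynomial:

$$
P_3(x) = \tfrac{1}{2}\left(5x^3 - 3x\right), \qquad P_3'(x) = \tfrac{3}{2}\left(5x^2 - 1\right)
$$

which is why backward returns grad_output * 1.5 * (5 * input ** 2 - 1). The full example: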
```python
# -*- coding: utf-8 -*-
import torch
import math


class LegendrePolynomial3(torch.autograd.Function):
    """
    We can implement our own custom autograd Functions by subclassing
    torch.autograd.Function and implementing the forward and backward passes
    which operate on Tensors.
    """

    @staticmethod
    def forward(ctx, input):
        """
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object that can be used
        to stash information for backward computation. You can cache arbitrary
        objects for use in the backward pass using the ctx.save_for_backward method.
        """
        ctx.save_for_backward(input)
        return 0.5 * (5 * input ** 3 - 3 * input)

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the input.
        """
        input, = ctx.saved_tensors
        return grad_output * 1.5 * (5 * input ** 2 - 1)


dtype = torch.float
device = torch.device("cuda:0")  # Change to torch.device("cpu") to run on CPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For this example, we need
# 4 weights: y = a + b * P3(c + d * x); these weights need to be initialized
# not too far from the correct result to ensure convergence.
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True)
b = torch.full((), -1.0, device=device, dtype=dtype, requires_grad=True)
c = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True)
d = torch.full((), 0.3, device=device, dtype=dtype, requires_grad=True)

learning_rate = 5e-6
for t in range(2000):
    # To apply our Function, we use the Function.apply method. We alias this as 'P3'.
    P3 = LegendrePolynomial3.apply

    # Forward pass: compute predicted y using operations; we compute
    # P3 using our custom autograd operation.
    y_pred = a + b * P3(c + d * x)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass.
    loss.backward()

    # Update weights using gradient descent
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} * P3({c.item()} + {d.item()} x)')
```
```
99 209.95834350585938
199 144.66018676757812
299 100.70249938964844
399 71.03520202636719
499 50.978515625
599 37.40313720703125
699 28.20686912536621
799 21.973186492919922
899 17.745729446411133
999 14.877889633178711
1099 12.931766510009766
1199 11.610918998718262
1299 10.714248657226562
1399 10.105474472045898
1499 9.692106246948242
1599 9.411375045776367
1699 9.220745086669922
1799 9.091285705566406
1899 9.003361701965332
1999 8.943639755249023
Result: y = -1.765793067320942e-11 + -2.208526849746704 * P3(9.924167737596079e-11 + 0.2554861009120941 x)
```
6. nn module
Computational graphs and autograd are a very powerful paradigm for defining complex operators and automatically taking derivatives; however, for large neural networks raw autograd can be a bit too low-level.
When building neural networks, we frequently think of arranging the computation into layers, some of which have learnable parameters that are optimized during learning.
In TensorFlow, packages like Keras, TensorFlow-Slim, and TFLearn provide higher-level abstractions over raw computational graphs that are useful for building neural networks.
In PyTorch, the nn package serves this same purpose. The nn package defines a set of Modules, which are roughly equivalent to neural network layers. A Module receives input Tensors and computes output Tensors, but may also hold internal state such as Tensors containing learnable parameters. The nn package also defines a set of useful loss functions that are commonly used when training neural networks.
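As a quick illustrative sketch (not part of the tutorial's running example), an nn.Linear Module holds a learnable weight and bias and maps input Tensors to output Tensors:

```python
import torch

layer = torch.nn.Linear(3, 1)  # computes y = x @ W^T + b
print(layer.weight.shape)      # torch.Size([1, 3]), a learnable parameter
print(layer.bias.shape)        # torch.Size([1]), also learnable

x = torch.randn(2000, 3)       # a batch of 2000 three-feature inputs
y = layer(x)                   # Modules are callable like functions
print(y.shape)                 # torch.Size([2000, 1])
```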
In this example, we use the nn package to implement our polynomial model network:
```python
# -*- coding: utf-8 -*-
import torch
import math

# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# In the above code, x.unsqueeze(-1) has shape (2000, 1), and p has shape
# (3,); for this case, broadcasting semantics will apply to obtain a tensor
# of shape (2000, 3).

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. The Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
# to match the shape of `y`.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(xx)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

# You can access the first layer of `model` like accessing the first item of a list
linear_layer = model[0]

# For the linear layer, its parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
```
```
99 1616.1385498046875
199 1081.325927734375
299 724.8551635742188
399 487.138671875
499 328.5332946777344
599 222.65386962890625
699 151.9323272705078
799 104.66616821289062
899 73.05643463134766
999 51.903411865234375
1099 37.738162994384766
1199 28.24566650390625
1299 21.879676818847656
1399 17.607154846191406
1499 14.73743724822998
1599 12.808303833007812
1699 11.510335922241211
1799 10.636259078979492
1899 10.047100067138672
1999 9.649618148803711
Result: y = 0.018086636438965797 + 0.8341416120529175 x + -0.0031202451791614294 x^2 + -0.09011583775281906 x^3
```
7. optim
So far, we have updated the weights of our models by manually mutating the Tensors holding the learnable parameters, wrapped in torch.no_grad(). This is not a huge burden for simple optimization algorithms like stochastic gradient descent, but in practice we often train neural networks using more sophisticated optimizers such as AdaGrad, RMSProp, Adam, and so on.
The optim package in PyTorch abstracts the idea of an optimization algorithm and provides implementations of commonly used optimization algorithms.
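All optimizers in the package share the same constructor pattern and the same zero_grad/step interface, so swapping algorithms is a one-line change. A minimal sketch (the model and learning rates here are placeholders, not the tutorial's):

```python
import torch

model = torch.nn.Linear(3, 1)
# Any of these constructors can be dropped in; only the hyperparameters differ.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)

x, y = torch.randn(16, 3), torch.randn(16, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()  # clear any previously accumulated gradients
loss.backward()        # compute fresh gradients
optimizer.step()       # apply the optimizer's update rule
```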
In this example, we will define our model using the nn package as before, but we will optimize it with the RMSprop algorithm provided by the optim package:
```python
# -*- coding: utf-8 -*-
import torch
import math

# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Prepare the input tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)
loss_fn = torch.nn.MSELoss(reduction='sum')

# Use the optim package to define an Optimizer that will update the weights of
# the model for us. Here we will use RMSprop; the optim package contains many other
# optimization algorithms. The first argument to the RMSprop constructor tells the
# optimizer which Tensors it should update.
learning_rate = 1e-3
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)

for t in range(2000):
    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(xx)

    # Compute and print loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable
    # weights of the model). This is because by default, gradients are
    # accumulated in buffers (i.e., not overwritten) whenever .backward()
    # is called. Check out the docs of torch.autograd.backward for more details.
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to model parameters
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its parameters
    optimizer.step()

linear_layer = model[0]
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
```
```
99 35157.9453125
199 17403.95703125
299 8047.3486328125
399 3556.0693359375
499 1893.0379638671875
599 1428.3843994140625
699 1216.1593017578125
799 1016.4676513671875
899 824.1697998046875
999 651.5247192382812
1099 504.0408935546875
1199 381.1749267578125
1299 279.7773742675781
1399 197.03244018554688
1499 131.4590301513672
1599 81.9945068359375
1699 47.25548553466797
1799 25.561182022094727
1899 14.26343822479248
1999 9.908321380615234
Result: y = -0.0006100632017478347 + 0.8261058926582336 x + -0.0006362820276990533 x^2 + -0.08855490386486053 x^3
```
8. Custom nn modules
Sometimes you will want to specify models that are more complex than a sequence of existing Modules; for these cases you can define your own Modules by subclassing nn.Module and defining a forward method which receives input Tensors and produces output Tensors using other Modules or other autograd operations on Tensors.
In this example, we implement our third-order polynomial model as a custom Module subclass:
```python
# -*- coding: utf-8 -*-
import torch
import math


class Polynomial3(torch.nn.Module):
    def __init__(self):
        """
        In the constructor we instantiate four parameters and assign them as
        member parameters.
        """
        super().__init__()
        self.a = torch.nn.Parameter(torch.randn(()))
        self.b = torch.nn.Parameter(torch.randn(()))
        self.c = torch.nn.Parameter(torch.randn(()))
        self.d = torch.nn.Parameter(torch.randn(()))

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must
        return a Tensor of output data. We can use Modules defined in the
        constructor as well as arbitrary operators on Tensors.
        """
        return self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3

    def string(self):
        """
        Just like any class in Python, you can also define custom methods on
        PyTorch modules.
        """
        return f'y = {self.a.item()} + {self.b.item()} x + {self.c.item()} x^2 + {self.d.item()} x^3'


# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Construct our model by instantiating the class defined above
model = Polynomial3()

# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters (defined with
# torch.nn.Parameter) which are members of the model.
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)

for t in range(2000):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f'Result: {model.string()}')
```
```
99 1308.099365234375
199 910.4903564453125
299 635.1412353515625
399 444.26885986328125
499 311.8292541503906
599 219.847900390625
699 155.90786743164062
799 111.42127990722656
899 80.44319152832031
999 58.854122161865234
1099 43.796546936035156
1199 33.28647232055664
1299 25.9451961517334
1399 20.813724517822266
1499 17.22447967529297
1599 14.712332725524902
1699 12.953007698059082
1799 11.720170974731445
1899 10.855795860290527
1999 10.249417304992676
Result: y = -0.037933703511953354 + 0.8685634732246399 x + 0.006544196978211403 x^2 + -0.09501205384731293 x^3
```
9. Control flow and weight sharing
As an example of dynamic graphs and weight sharing, we implement a very strange model: a third-to-fifth-order polynomial that, on each forward pass, chooses a random number between 3 and 5 and uses that many orders, reusing the same weight multiple times to compute the fourth- and fifth-order terms.
For this model we can use normal Python flow control to implement the loop, and we can implement weight sharing by simply reusing the same parameter multiple times when defining the forward pass.
We can easily implement this model as a subclass of Module:
```python
# -*- coding: utf-8 -*-
import random
import torch
import math


class DynamicNet(torch.nn.Module):
    def __init__(self):
        """
        In the constructor we instantiate five parameters and assign them as members.
        """
        super().__init__()
        self.a = torch.nn.Parameter(torch.randn(()))
        self.b = torch.nn.Parameter(torch.randn(()))
        self.c = torch.nn.Parameter(torch.randn(()))
        self.d = torch.nn.Parameter(torch.randn(()))
        self.e = torch.nn.Parameter(torch.randn(()))

    def forward(self, x):
        """
        For the forward pass of the model, we randomly choose a polynomial
        order of 3, 4, or 5 and reuse the e parameter to compute the
        contribution of the fourth- and fifth-order terms.

        Since each forward pass builds a dynamic computation graph, we can use
        normal Python control-flow operators like loops or conditional
        statements when defining the forward pass of the model.

        Here we also see that it is perfectly safe to reuse the same parameter
        many times when defining a computational graph.
        """
        y = self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3
        for exp in range(4, random.randint(4, 6)):
            y = y + self.e * x ** exp
        return y

    def string(self):
        """
        Just like any class in Python, you can also define custom methods on
        PyTorch modules.
        """
        return f'y = {self.a.item()} + {self.b.item()} x + {self.c.item()} x^2 + {self.d.item()} x^3 + {self.e.item()} x^4 ? + {self.e.item()} x^5 ?'


# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Construct our model by instantiating the class defined above
model = DynamicNet()

# Construct our loss function and an Optimizer. Training this strange model with
# vanilla stochastic gradient descent is tough, so we use momentum
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-8, momentum=0.9)

for t in range(30000):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    if t % 2000 == 1999:
        print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f'Result: {model.string()}')
```
```
1999 4006.545166015625
3999 1704.7608642578125
5999 790.6707763671875
7999 356.1207275390625
9999 161.2637481689453
11999 74.07376098632812
13999 38.51034164428711
15999 21.840112686157227
17999 14.520430564880371
19999 11.310136795043945
21999 9.908504486083984
23999 9.299556732177734
25999 8.865952491760254
27999 8.918462753295898
29999 8.871894836425781
Result: y = 0.0011926052393391728 + 0.8512187004089355 x + -0.0006726108840666711 x^2 + -0.09276143461465836 x^3 + 0.00010200009273830801 x^4 ? + 0.00010200009273830801 x^5 ?
```