PyTorch Deep Learning and Graph Neural Network Practice Series 04: PyTorch Quick Start

Posted by schme16 on Tue, 01 Mar 2022 14:19:48 +0100

1 tensor data operations

1.1 torch.reshape() to change tensor shape

import torch
a = torch.tensor([[1,2],[3,4]])
print(torch.reshape(a,(1,-1))) # Reshape into a tensor with a single row; the -1 tells PyTorch to compute that dimension automatically
# tensor([[1, 2, 3, 4]])
print(a.reshape((1,-1))) # The Tensor.reshape() method gives the same result
# tensor([[1, 2, 3, 4]])
print(a.view((1,-1))) # So does Tensor.view()
# tensor([[1, 2, 3, 4]])
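
A note worth adding (my example, not from the original post): view() returns a view that shares memory with the source tensor, and reshape() also avoids copying whenever it can, so modifying the result can modify the original tensor:

import torch
a = torch.tensor([[1,2],[3,4]])
v = a.view(1,-1) # view() shares storage with a
v[0,0] = 99 # Writing through the view...
print(a) # ...also changes the original tensor
# Output tensor([[99,  2],
#         [ 3,  4]])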

1.2 matrix transposition of tensor data

import torch
b = torch.tensor([[5,6,7],[2,8,0]]) # Define two-dimensional tensor
print(torch.t(b)) # Transpose matrix
# Output tensor([[5, 2],
#         [6, 8],
#         [7, 0]])
print(torch.transpose(b,dim0=1,dim1=0)) # Transpose by swapping dimension 1 with dimension 0
# Output tensor([[5, 2],
#         [6, 8],
#         [7, 0]])
print(b.permute(1,0)) # permute() likewise swaps dimension 1 with dimension 0
# Output tensor([[5, 2],
#         [6, 8],
#         [7, 0]])
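
One detail to keep in mind (my addition): torch.t() is only defined for tensors with at most two dimensions, while transpose() swaps exactly two dimensions and permute() can reorder any number of them. A small sketch on a three-dimensional tensor:

import torch
x = torch.zeros(2,3,4) # Define a three-dimensional tensor
print(torch.transpose(x,0,2).shape) # transpose() swaps dimensions 0 and 2
# Output torch.Size([4, 3, 2])
print(x.permute(2,0,1).shape) # permute() reorders all three dimensions at once
# Output torch.Size([4, 2, 3])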

1.3 view() and contiguous() methods

1.3.1 overview

view() only works on tensors stored in one contiguous block of memory; tensors in non-contiguous memory cannot be processed with this function. In particular, it cannot reshape a tensor after transpose() or permute(), since those operations leave the data non-contiguous.

view() therefore needs to be used together with contiguous(), which ensures the tensor occupies a single contiguous memory block.

1.3.2 code

import torch
b = torch.tensor([[5,6,7],[2,8,0]]) # Define a two-dimensional tensor
print(b.is_contiguous()) # Check whether the memory is contiguous
# Output True
c = b.transpose(0,1)
print(c.is_contiguous()) # Check whether the memory is contiguous
# Output False
print(c.contiguous().is_contiguous()) # contiguous() returns a tensor in contiguous memory
# Output True
print(c.contiguous().view(-1)) # After contiguous(), view() can flatten the tensor
# Output tensor([5, 2, 6, 8, 7, 0])
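
To make the restriction from 1.3.1 concrete, here is a small sketch (my example) showing view() failing on a non-contiguous tensor, and reshape() succeeding because it copies the data when needed:

import torch
b = torch.tensor([[5,6,7],[2,8,0]])
c = b.transpose(0,1) # The result is non-contiguous
try:
    print(c.view(-1)) # view() cannot flatten non-contiguous memory
except RuntimeError as e:
    print('view() failed:', e)
print(c.reshape(-1)) # reshape() falls back to copying, so it works
# Output tensor([5, 2, 6, 8, 7, 0])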

1.4 torch.cat() tensor concatenation function

1.4.1 overview

The torch.cat() function concatenates two tensors along a specified dimension, an operation that is very common in neural networks.

1.4.2 code

import torch
a = torch.tensor([[1,2],[3,4]]) #Define two-dimensional tensor
b = torch.tensor([[5,6],[7,8]])

print(torch.cat([a,b],dim=0)) #Connect along the 0 dimension
# Output tensor([[1, 2],
#         [3, 4],
#         [5, 6],
#         [7, 8]])
print(torch.cat([a,b],dim=1)) #Connect along dimension 1
# Output tensor([[1, 2, 5, 6],
#         [3, 4, 7, 8]])
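
torch.cat() is not limited to two tensors: it accepts any sequence of tensors whose shapes match along the non-concatenated dimensions. A quick sketch (my example):

import torch
a = torch.tensor([[1,2],[3,4]])
b = torch.tensor([[5,6],[7,8]])
c = torch.tensor([[9,10],[11,12]])
print(torch.cat([a,b,c],dim=0).shape) # Three 2x2 tensors joined along dimension 0
# Output torch.Size([6, 2])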

1.5 torch.chunk() for even splitting of data

1.5.1 overview

torch.chunk() splits a multidimensional tensor into a specified number of parts along a specified dimension. It returns a tuple, which cannot be modified.

1.5.2 code

import torch
a = torch.tensor([[1,2],[3,4]])

print(torch.chunk(a,chunks=2,dim=0)) #The tensor a is divided into two parts according to the 0th dimension
# Output (tensor([[1, 2]]), tensor([[3, 4]]))
print(torch.chunk(a,chunks=2,dim=1)) #The tensor a is divided into two parts according to the first dimension
# Output (tensor([[1],[3]]), tensor([[2],[4]]))
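
When the size along the chosen dimension is not divisible by chunks, torch.chunk() simply makes the last chunk smaller, as in this sketch (my example):

import torch
b = torch.tensor([[5,6,7],[2,8,0]]) # 3 columns cannot be divided evenly into 2 chunks
print(torch.chunk(b,chunks=2,dim=1))
# Output (tensor([[5, 6],[2, 8]]), tensor([[7],[0]]))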

1.6 torch.split() for uneven splitting of data

import torch
b = torch.tensor([[5,6,7],[2,8,0]])
# Split along dimension 1 into two parts with 1 and 2 columns
# split_size_or_sections specifies how many elements each part receives; any elements left over form the last part
print(torch.split(b,split_size_or_sections=(1,2),dim=1))
# Output (tensor([[5],[2]]), tensor([[6, 7],[8, 0]]))
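
split_size_or_sections can also be a single integer, in which case every part gets that many elements and the remainder forms the last part, for example (my sketch):

import torch
b = torch.tensor([[5,6,7],[2,8,0]])
print(torch.split(b,split_size_or_sections=2,dim=1)) # Parts of up to 2 columns each
# Output (tensor([[5, 6],[2, 8]]), tensor([[7],[0]]))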

1.7 torch.gather() retrieves tensor data

1.7.1 overview

torch.gather() picks values out of a tensor according to the given indices, along a specified dimension. The index argument must be a tensor with the same number of dimensions as the input.

1.7.2 code

import torch
b = torch.tensor([[5,6,7],[2,8,0]])
# Along the first dimension, the values are arranged according to the shape of the index
print(torch.gather(b,dim=1,index=torch.tensor([[1,0],[1,2]])))
#Output tensor([[6, 5],[8, 0]])

# Along the 0-th dimension, the values are arranged according to the shape of the index
print(torch.gather(b,dim=0,index=torch.tensor([[1,0,0]])))
#Output tensor([[2, 6, 7]])

print(torch.index_select(b,dim=0,index=torch.tensor([1]))) # index_select() takes out whole rows (or columns); index should be a 1-D tensor
#Output tensor([[2, 8, 0]])
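
A common use of torch.gather() in neural networks is picking, for every sample, the score of its target class. A minimal sketch with made-up scores and labels:

import torch
scores = torch.tensor([[0.1,0.7,0.2],[0.6,0.3,0.1]]) # Per-class scores for 2 samples (illustrative values)
labels = torch.tensor([1,0]) # Target class of each sample
picked = torch.gather(scores,dim=1,index=labels.unsqueeze(1)) # One index per row
print(picked.squeeze(1))
# Output tensor([0.7000, 0.6000])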

1.8 filter tensor data by a specified threshold

1.8.1 overview

torch.gt(): greater than

torch.ge(): greater than or equal to

torch.lt(): less than

torch.le(): less than or equal to

1.8.2 code

import torch
b = torch.tensor([[1,2],[2,8]])
mask = b.ge(2) #Greater than or equal to 2
print(mask)
# Output tensor([[False,  True],
#         [ True,  True]])
print(torch.masked_select(b,mask))
# Output tensor([2, 2, 8])
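
For what it's worth (my note), boolean indexing with the mask gives the same flattened result as torch.masked_select():

import torch
b = torch.tensor([[1,2],[2,8]])
print(b[b.ge(2)]) # Equivalent shorthand for masked_select()
# Output tensor([2, 2, 8])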

1.9 find the indices of non-zero values in a tensor

import torch
eye = torch.eye(3) # Generate a 3x3 identity matrix
print(eye)
# Output tensor([[1., 0., 0.],
#         [0., 1., 0.],
#         [0., 0., 1.]])
print(torch.nonzero(eye)) # Find the indices of the non-zero elements
# Output tensor([[0, 0],
#         [1, 1],
#         [2, 2]])
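
In recent PyTorch versions, torch.nonzero() also accepts as_tuple=True, returning one index vector per dimension (a small sketch of my own):

import torch
eye = torch.eye(3)
rows, cols = torch.nonzero(eye, as_tuple=True) # One 1-D index tensor per dimension
print(rows, cols)
# Output tensor([0, 1, 2]) tensor([0, 1, 2])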

1.10 take values from a tensor according to a condition

import torch
b = torch.tensor([[5,6,7],[2,8,0]])
c = torch.ones_like(b) # Generate a matrix of ones with the same shape as b
print(c)
# Output tensor([[1, 1, 1],
#           [1, 1, 1]])
print(torch.where(b>5,b,c)) # Take elements of b that are greater than 5; where b is 5 or less, take the value from c
# Output tensor([[1, 6, 7],
#            [1, 8, 1]])
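
The same pattern can substitute a constant for the unwanted values; for instance (my sketch), keeping only values above 5 and zeroing the rest, much like a thresholding activation:

import torch
b = torch.tensor([[5,6,7],[2,8,0]])
print(torch.where(b>5,b,torch.zeros_like(b))) # Values of 5 or below become 0
# Output tensor([[0, 6, 7],
#         [0, 8, 0]])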

1.11 truncate data to a threshold range

1.11.1 overview

Truncating data to a threshold range is used in gradient computation: setting a fixed threshold for the gradients helps avoid gradient explosion during training.

Gradient explosion: the parameter updates grow larger with each training step, which eventually makes the training fail to converge.

1.11.2 code

import torch
a = torch.tensor([[1,2],[3,4]])
b = torch.clamp(a,min=2,max=3) # Truncate values to the range [2, 3]
print(b)
# Output tensor([[2, 2],
#              [3, 3]])
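
To connect this with the gradient-explosion remark in 1.11.1, here is a contrived sketch (my example; the loss is artificial) of clamping gradients before a parameter update. In practice, torch.nn.utils.clip_grad_value_() wraps the same idea:

import torch
w = torch.randn(3, requires_grad=True)
loss = (w * 1000).sum() # Contrived loss that produces large gradients
loss.backward()
with torch.no_grad():
    w.grad.clamp_(min=-1.0, max=1.0) # Truncate each gradient element to [-1, 1]
    w -= 0.1 * w.grad # Gradient-descent step using the clipped gradients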

1.12 get the indices of the maximum and minimum values

1.12.1 overview

torch.argmax(): returns the index of the maximum value

torch.argmin(): returns the index of the minimum value

1.12.2 code

import torch
a = torch.tensor([[1,2],[3,4]])
print(torch.argmax(a,dim=0)) # Find the indices of the maximum values along dimension 0
# Output tensor([1, 1])
print(torch.argmin(a,dim=1)) # Find the indices of the minimum values along dimension 1
# Output tensor([0, 0])
print(torch.max(a,dim=0)) # Find both the maximum values and their indices along dimension 0
# Output torch.return_types.max(values=tensor([3, 4]), indices=tensor([1, 1]))
print(torch.min(a,dim=1)) # Find both the minimum values and their indices along dimension 1
# Output torch.return_types.min(values=tensor([1, 3]), indices=tensor([0, 0]))
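
In classification code, torch.argmax() is the usual way to turn network outputs into predicted class indices; a minimal sketch with made-up logits:

import torch
logits = torch.tensor([[0.2,1.5,0.1],[2.0,0.3,0.4]]) # Illustrative outputs for 2 samples
pred = torch.argmax(logits, dim=1) # Predicted class index per sample
print(pred)
# Output tensor([1, 0])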

Topics: neural networks, PyTorch, deep learning