Summary of work on training an intelligent trash can (1)

Posted by rdoylelmt on Mon, 27 Dec 2021 14:12:56 +0100

Summary 1

I was very lucky to take part in the 2021 provincial Engineering Training Competition, the first provincial competition since I started university. Although we didn't advance to the national competition, I was quite satisfied with our results. Below are some problems we ran into during preparation and how we solved them. I'm a beginner, so please correct me if anything is wrong.

Code

Dataset training

The dataset is trained on Baidu PaddlePaddle (AI Studio); with only slight modifications, the official five-flowers tutorial can be used to train your own dataset.
Five-flowers tutorial:
Link: link.

The training process is as follows:
1. Delete the compressed package in data/data2815 and upload your own dataset (a single archive containing one folder per class with the images inside, not one big pile of files; refer to the five-flowers dataset for the structure. It is best to avoid brackets in image file names).
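As a sanity check for step 1, a short script can confirm the layout (one sub-folder per class, images inside) and flag bracketed file names. This helper is my own addition, not part of the official tutorial; the path you pass in should be wherever your archive decompressed to.

```python
import os

def check_layout(root):
    """Return {class_name: image_count} for each class sub-folder,
    plus a list of image names containing brackets (best avoided)."""
    counts, bracketed = {}, []
    for name in sorted(os.listdir(root)):
        folder = os.path.join(root, name)
        if not os.path.isdir(folder):
            continue
        images = [f for f in os.listdir(folder)
                  if f.lower().endswith((".jpg", ".jpeg", ".png"))]
        counts[name] = len(images)
        bracketed += [f for f in images if "(" in f or ")" in f]
    return counts, bracketed

# counts, bracketed = check_layout("data/data2815/garbage")
```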
2. Then change the path in the program. The official program is:

# Decompress the flower dataset
!cd data/data2815 && unzip -q flower_photos.zip

Change flower_photos.zip to the name of the zip file you uploaded, and it will decompress your own dataset:

!cd data/data2815 && unzip -q garbage.zip  # decompress the dataset

In the following code, change the path 'data/data2815/' to 'data/data2815/garbage' (garbage is the folder produced by decompressing the dataset archive; change it to match your own dataset name).
If you do not change the path, the preprocessing code reports a single large class and the images cannot be classified.
Operation results:

['garbage']

With the path changed to 'data/data2815/garbage', the result is:

Run time: 309 milliseconds
['barry', 'can', 'kiwa', 'smoke', 'zhuank']

At this point the program recognizes 5 categories, namely those listed above.
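Those class names are simply the sub-folder names under the dataset directory, which is why the path change in step 2 matters. A minimal sketch of what the preprocessing step effectively does (my own reconstruction, not the tutorial's exact code):

```python
import os

def list_classes(data_root):
    """Class labels are the sub-folder names of the dataset root."""
    return sorted(d for d in os.listdir(data_root)
                  if os.path.isdir(os.path.join(data_root, d)))

# Pointed at 'data/data2815/' this sees only ['garbage'] (one big class);
# pointed at 'data/data2815/garbage' it sees the five real classes.
```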
3. Then comes the dataset training (the longest part of the program, more than 600 lines):

if __name__ == '__main__':
    init_log_config()
    init_train_parameters()
    train()

The following shows training completing:

2021-08-07 17:33:35,970 - <ipython-input-1-b7bca58bf88e>[line:635] - INFO: end training
2021-08-07 17:33:35,971-INFO: training till last epcho, end training
2021-08-07 17:33:35,971 - <ipython-input-1-b7bca58bf88e>[line:649] - INFO: training till last epcho, end training

If an error mentioning "... expect one ..." is reported when training starts, it is usually caused by one of two problems:
a. Some images in the dataset contain more than one object of a class, e.g. several mineral water bottles in one picture. A few such pictures are tolerable, but too many will cause the error. Solution: simplify the dataset so each image contains only one object.
b. You changed the dataset during training, but the intermediate files in the panel on the left were not cleared. Solution: delete the previous project, open a new one, and start again.
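Before retraining after such an error, it is worth a quick preflight to confirm that every file in the dataset really is an image. This is my own hygiene check based on file magic numbers (JPEG/PNG only), not part of the official tutorial:

```python
import os

# File signatures for the two formats used here (an assumption: your
# dataset is all JPEG/PNG; extend the tuple for other formats).
_MAGICS = (b"\xff\xd8\xff", b"\x89PNG\r\n\x1a\n")

def find_bad_images(data_root):
    """Return paths of files whose header matches no known image magic."""
    bad = []
    for dirpath, _, filenames in os.walk(data_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                head = f.read(8)
            if not any(head.startswith(m) for m in _MAGICS):
                bad.append(path)
    return bad
```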

4. After training, evaluate the model on the test set:

if __name__ == '__main__':    
    eval_all()  

The run result includes a prediction accuracy, which determines how parameters are set in the code that follows.

Run time: 18 seconds 244 milliseconds
End time: 2021-08-07 17:42:41
total eval count:292 cost time:18.15 sec predict accuracy:0.7191780821917808

The model's prediction accuracy is 0.7191780821917808. Generally, if the value is below 0.6 the model has little recognition ability; it is best to improve the dataset and retrain. I tried it myself with a low-accuracy model, and the recognition effect was not good.
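That accuracy is just correct predictions divided by the total evaluation count; working backwards from the two numbers in the log (292 images, accuracy 0.7191780821917808) recovers the number of correct predictions:

```python
total = 292                        # total eval count from the log
accuracy = 0.7191780821917808      # predict accuracy from the log

correct = round(total * accuracy)  # -> 210 images classified correctly
print(correct, correct / total)    # dividing back reproduces the accuracy
```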

With the most difficult part solved, the rest is to change the paths step by step, run each cell, and wait for the kmodel file to be generated.
Do not forget to change the path in the following code:

Model compression requires quantization. To preserve accuracy after quantization, the model needs to be calibrated with training pictures. Copy evaluation pictures to /home/aistudio/work/images:
#!mkdir /home/aistudio/work/images
#!cp -rf /home/aistudio/data/data2815/evalImageSet/*  /home/aistudio/work/images/
import os
import shutil

!mkdir /home/aistudio/work/images

# Copy every 7th evaluation image into work/images as calibration data
src_dir = "/home/aistudio/data/data2815/garbage/evalImageSet/"
filenames = os.listdir(src_dir)

for index in range(0, len(filenames), 7):
    srcFile = os.path.join(src_dir, filenames[index])
    targetFile = os.path.join("/home/aistudio/work/images", filenames[index])
    shutil.copyfile(srcFile, targetFile)
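Stepping through the evaluation set 7 images at a time copies about one seventh of it. Assuming evalImageSet holds all 292 evaluation images reported in the log, the number of calibration images works out to:

```python
import math

total_eval = 292   # eval count from the log above (an assumption here)
step = 7           # the loop copies every 7th image

copied = math.ceil(total_eval / step)
print(copied)      # -> 42 calibration images land in work/images
```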

Topics: Deep Learning