Preface:
After thinking it over carefully, I originally wanted to start a new series for what comes next, but since the visual AI model work builds directly on the previous three articles, I have decided not to open a new topic and to continue here instead.
If you have not read the previous articles, or want to review them, you can follow the links below in order.
You may also want to have your own AI model file format - (1): https://blog.csdn.net/Pengcode/article/details/121754272
You may also want to have your own AI model file format - (2): https://blog.csdn.net/Pengcode/article/details/121776674
You may also want to have your own AI model file format - (3): https://blog.csdn.net/Pengcode/article/details/121843704
Next goal: visualizing the AI model
In the first three articles, we successfully defined our own AI model file format, and in the third article we generated the first model file in that format, one that is fully customized in the real sense. It is no exaggeration to say: congratulations, we have defined a format that could be called a mini ONNX.
However, it still feels incomplete. Remember how we inspected the model's contents last time? We used the flatc tool to dump the model to JSON text, then opened that text to view the internal information and pass it along to others. That feels embarrassingly crude! When models from Caffe, TensorFlow, and PyTorch can all be visualized, our special AI model format must clear the visualization bar too!
So the next goal is decided: visualize our own AI model in Netron. We need to build a Netron that understands our model format!
Without further ado, let's get to work!
Visualizing the custom model file format in Netron
Previously, I used flatbuffers to describe the data structures of the custom model file, giving models the suffix .PZKM, and wrote the corresponding C++ code to generate the first model file in the custom format. Last time, flatc was used to parse that binary file into a JSON text file, which is still not ideal. So this time the plan is to adapt our custom model file to Netron and achieve visualization.
I started on this plan as soon as I had built Netron, and it took a lot of time. Since the goal is to adapt a custom model format, and our format uses flatbuffers, I took tflite (TensorFlow Lite), whose model file format is also described with flatbuffers, as the main research object, hoping to speed up development.
1. Reading Netron's adaptation source code
At first I was at a loss facing such a huge pile of JS code. This time I used Chrome's developer tools to debug and trace one Netron button: the entry point is the index.html file, specifically the button tag with id="open-file-button". The overall flow bound to that button is as follows:
(1) After the user opens a model file, the JS code obtains the model file's information (path, file name, file suffix; it appears the file contents are also read in as binary);
(2) Based on that information and certain matching logic, the JS code decides which framework the model file belongs to. For example, a .tflite model file corresponds to TensorFlow Lite, so Netron calls the member function require(id) to load the framework's source file tflite.js, which exports its adaptation interface through the line "module.exports.ModelFactory = tflite.ModelFactory;";
(3) Netron then invokes the ModelFactory.match(context) and ModelFactory.open(context, match) interfaces, which implement the adaptation of tflite model files;
(4) Finally, Netron uses the information returned by .open() to draw the visual graph of the model (I have not read this part of the source code yet; it will be covered later).
2. Reading the design ideas behind tflite's schema
Tflite also has a schema file, similar to our customized model file format. So, to understand tflite's adaptation source code, reading its schema file is also very important.
Looking at tflite's design, the directed-graph connectivity is mainly expressed in the ops: each op carries its input tensor IDs and output tensor IDs, so the connection information can be recovered from them.
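To illustrate how connectivity falls out of those IDs, the sketch below recovers directed edges from a flat list of ops: an op that consumes a tensor another op produced is its downstream node. The op records here are invented stand-ins, not the real fields of tflite's schema.

```javascript
// Recover directed edges from a flat op list, where each op records the
// IDs of the tensors it consumes (inputs) and produces (outputs).
function buildEdges(ops) {
    // Map each tensor ID to the index of the op that produces it.
    const producer = new Map();
    ops.forEach((op, i) => op.outputs.forEach(id => producer.set(id, i)));
    // An edge runs from a producer op to every op consuming its output.
    const edges = [];
    ops.forEach((op, i) => {
        for (const id of op.inputs) {
            if (producer.has(id)) {
                edges.push([producer.get(id), i]);
            }
        }
    });
    return edges;
}

// Example: three ops chained through tensors 1 and 2.
const edges = buildEdges([
    { inputs: [0], outputs: [1] },  // op 0: graph input -> tensor 1
    { inputs: [1], outputs: [2] },  // op 1: tensor 1 -> tensor 2
    { inputs: [2], outputs: [3] }   // op 2: tensor 2 -> graph output
]);
// edges is [[0, 1], [1, 2]]
```

This is essentially what a visualizer has to do before it can lay out the graph: turn per-op tensor IDs into node-to-node edges.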
3. Using flatbuffers in javascript
Because Netron is written in JavaScript, the thorny problem at present is that the latest flatbuffers release has dropped direct JavaScript support: you now have to follow the route schema -> typescript -> javascript. However, the JavaScript tutorial on the official flatbuffers website has not been updated accordingly, and the TypeScript code generated by the schema -> typescript step differs considerably from what it describes. So I do not recommend following the official JavaScript tutorial (it cannot be fully reproduced).
After some groping around, I finally worked out how to use the flatbuffers tooling from JavaScript. Continuing with the earlier schema file, I will explain how to read, in JavaScript, the model we generated before, that is, how to parse the first model file of our special AI format.
3.1. Compile the schema file to typescript using flatc

```shell
#!/bin/bash
# cd to the directory where the schema file is stored
$ cd schema-path
# Use the flatc compiler to compile the schema file to typescript
$ flatc --ts pzk-schema.fbs
# A pzk-model folder and a pzk-schema.ts file are generated in the current directory
$ ls
# pzk-model pzk-schema.ts pzk-schema.fbs
```
3.2. Install dependent libraries and tools
This step has two parts: first, using flatbuffers from JavaScript requires installing its dependency library through npm; second, converting TypeScript to JavaScript requires the tsc code conversion tool.
The relevant installation commands are as follows:
```shell
#!/bin/bash
# Globally install the flatbuffers dependency library for javascript
$ npm install -g flatbuffers
# Globally install the tsc tool
$ npm install -g typescript
# The libraries above do not have to be installed globally; to use them only in
# the current project, just drop the -g flag. A global install is still strongly
# recommended, because it makes the tsc command easier to use.
```
3.3. TypeScript -> JavaScript conversion
This part is very important, because from here on the official tutorial no longer gets you through. Also, if your own custom model differs from mine, you will need to adapt the following steps to match.
3.3.1. Step 1: find the main typescript file to be converted
When the flatc compiler targets TypeScript, it generates multiple TypeScript code files. As shown above, the schema I defined earlier produces a pzk-schema.ts file and a pzk-model folder. That pzk-model folder contains multiple TypeScript files, listed below:
| Serial number | typescript file name | Correspondence with schema file |
| --- | --- | --- |
| 1 | attr-meta.ts | table AttrMeta |
| 2 | attributes.ts | table Attributes |
| 3 | connect.ts | table Connect |
| 4 | data-layout.ts | enum DataLayout |
| 5 | data-type.ts | enum DataType |
| 6 | layer.ts | table Layer |
| 7 | p-model.ts | table PModel |
| 8 | tensor-shape.ts | table TensorShape |
| 9 | tensor-type.ts | enum TensorType |
| 10 | tensor.ts | table Tensor |
| 11 | time.ts | table Time |
| 12 | weights.ts | table Weights |
From this we can easily see that flatc puts each data structure we defined into its own TypeScript file, and the file names follow a regular rule derived from the CamelCase type names. For example, a data structure called AxxxxBxxxCxxx in the schema becomes a file named axxxx-bxxx-cxxx.ts when flatc compiles it to TypeScript.
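That naming rule is just the PascalCase type name split at word boundaries, lower-cased, and joined with hyphens. Here is a small sketch of the mapping; this is my own reimplementation of the observed convention, not flatc's actual code:

```javascript
// Convert a flatbuffers type name (e.g. "PModel", "AttrMeta") into the
// kebab-case file name that flatc --ts generates for it.
function typeToFileName(name) {
    return name
        // Split a run of capitals from a following word: PModel -> P-Model
        .replace(/([A-Z]+)([A-Z][a-z])/g, '$1-$2')
        // Split lower/digit-to-upper boundaries: AttrMeta -> Attr-Meta
        .replace(/([a-z0-9])([A-Z])/g, '$1-$2')
        .toLowerCase() + '.ts';
}

console.log(typeToFileName('AttrMeta'));    // attr-meta.ts
console.log(typeToFileName('PModel'));      // p-model.ts
console.log(typeToFileName('TensorShape')); // tensor-shape.ts
```

This matches every row of the table above, which is how we can predict where flatc put each of our schema's types.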
Knowing this relationship, we know which TypeScript file to convert. Yes, p-model.ts is our main program file, because PModel is the root data type in our schema file. (According to the instructions on the flatbuffers website, the outermost pzk-schema.ts should be the main program file, but if you convert that one you will find that some data structures never get converted to JavaScript.)
Therefore, we will proceed as follows.
3.3.2. Step 2: formally convert typescript -> javascript
Having identified the main program file in the previous step, we use the following conversion command:

```shell
#!/bin/bash
# Use the tsc conversion tool
$ tsc pzk-model/p-model.ts
# Remove the now-unneeded typescript files
$ find . -name "*.ts" | xargs rm -rfv
```
So far, if everything went correctly, we get the following js files:

```shell
#!/bin/bash
$ ls pzk-model
:<<!
# The following javascript code files are produced:
attributes.js  connect.js      data-type.js  p-model.js  tensor-shape.js  time.js
attr-meta.js   data-layout.js  layer.js      tensor.js   tensor-type.js   weights.js
!
```
3.4. Actually using flatbuffers in javascript
Here we use the JavaScript code generated in the previous step to actually parse and read the first model file of our special AI format.
Create a new JavaScript file, which I call try-flatc.js; its code is as follows:

```javascript
const fs = require('fs');
var flatbuffers = require('flatbuffers');
var PModel = require('./pzk-model/p-model').PModel;

var builder = new flatbuffers.Builder(1024);
var author = builder.createString("name");
var version = builder.createString("v1.0");
var modelname = builder.createString("what fuck");
console.log("try use flatbuffer");
console.log(author == undefined);
console.log("try load custom model file");
// Read the binary model file
var bytes = new Uint8Array(fs.readFileSync('/home/pengzhikang/project/custom-model/build/release/first.PZKM'));
// Parse the binary using the flatbuffers interface
var buf = new flatbuffers.ByteBuffer(bytes);
// Get our PModel root data structure from the parsed buf
var mymodel = PModel.getRootAsPModel(buf);
// Get the author information of the model
console.log("author is " + mymodel.author());
// Get the version number of the model
console.log("model version is " + mymodel.version());
// Get the name of the model
console.log("model name is " + mymodel.modelName());
// Get the creation time of the model
var model_time = mymodel.createTime();
// Print the time information
console.log("mode create time is " + model_time.year() + "/" + model_time.month() + "/" + model_time.day() +
    " " + model_time.hour() + ":" + model_time.min() + ":" + model_time.sec());
// Read a list (vector) field from the model file
var inputid_array = mymodel.modelRuntimeInputIdArray();
console.log("model runtime input id list is above:");
for (x in inputid_array) {
    console.log(x + ",");
}
// Iterate over the tensor buffer vector
console.log("model tensor length is " + mymodel.tensorBufferLength());
for (var i = 0; i < mymodel.tensorBufferLength(); i++) {
    console.log(mymodel.tensorBuffer(i).id() + ":" + mymodel.tensorBuffer(i).name());
}
```
Run the try-flatc.js file:

```shell
#!/bin/bash
$ node try-flatc.js
:<<!
# It prints the model information: author, version number, creation time, tensor information, etc.
try use flatbuffer
false
try load custom model file
author is pengzhikang
model version is v2.1
model name is holly-model
mode create time is 2021/12/9 23:38:53
model runtime input id list is above:
0,
model tensor length is 4
0:model_input_0
1:tensor_1
2:tensor_2
3:tensor_3
!
```
At this point, we have finished parsing the first model file instance of our special AI format in JavaScript.
Because of space limits, and because I am writing while developing, the articles are a bit scattered and uneven in length. Please forgive me, and share your opinions at any time. If you are interested in developing this together, you are welcome!