You may also want to have your own AI model file format - (3)

Posted by khr2003 on Fri, 10 Dec 2021 11:23:41 +0100

If you have not read the first two articles, or want to review them, here are the links:

You may also want to have your own AI model file format - (1): https://blog.csdn.net/Pengcode/article/details/121754272
You may also want to have your own AI model file format - (2): https://blog.csdn.net/Pengcode/article/details/121776674

The first two articles described the motivation for making our own AI model file format, wrote down our requirements, and made a corresponding plan. We then started to carry out the plan from the first article: so far we have prepared the environment, the model file layout, and the data protocol (we compiled the schema file with FlatBuffers and finished defining the data structures that describe our special AI model).

With this article we enter the actual coding stage, and I will explain the whole coding process in as much detail as possible. The goal this time is to use the previous work to write the code that generates the first model in our special AI format; in other words, this time we will produce a truly self-defined model file.

Let's get to work!

1, Overall planning

You know, "generate the first model of our special AI format" is more than just writing it down in code. After all, we may well hope that in a few years our special AI model format reaches the status that Caffe enjoys today (ha). So before coding, we set the following goals for the interface:

- Interface friendliness: the programming interface needs to be easy to understand, and passing parameters and reading outputs should be as simple as possible. It should be easy to use not only for ourselves, but also for others.
- Convenience: the main goal is to build a model file with as little code as possible.
- Extensibility: provide a fairly general interface so that the different needs of different users can be met.
- Standardization: an interface that is too flexible invites non-standard usage; to keep things standardized, we need to write more validation logic into the interfaces we provide.

Now that we have determined the above objectives, we naturally have some ideas about the next work. Combined with the work of the previous article, we have formulated the following workflow:

1. Compile the schema file written last time with FlatBuffers and generate code. Objective: obtain the programming interface of the special AI model for the target platform.
2. Write a JSON specification that describes each network layer. Objective: the JSON standardizes the network layer descriptions, balancing the requirements of flexibility and standardization.
3. Find a way to read JSON files and download the corresponding library. Objective: the JSON written in step 2 can be parsed and used in code.
4. Formal coding. Objective: based on the interfaces and libraries obtained in the steps above, write the construction interface of the special AI model.
5. Using the interface written in step 4, write an example that builds our first model file. Objective: verify that the code in step 4 is correct.

2, Preparation before formal code writing

To keep the layout reasonable, this section carries out the first three steps of the workflow. These three steps are relatively easy, so you can read through them at ease.

2.1. Use FlatBuffers to generate code for schema

The schema file was already defined in the previous article, so in fact we only need to know how to use the flatc tool (flatc is the main executable installed with FlatBuffers).

The schema file defining the data exchange protocol for our AI model was saved in the previous article as pzk-schema.fbs in the model-flatbuffer directory, as shown below:

$ ls model-flatbuffer
pzk-schema.fbs

To hold the code that flatc generates from pzk-schema.fbs, I created an include folder:

$ mkdir include

Use the flatc tool to generate code with the command shown below. You will then see the generated code under include/, named pzk-schema_generated.h:

$ flatc -c -o include/  model-flatbuffer/pzk-schema.fbs
$ ls include/
pzk-schema_generated.h

2.2. Write json standardization documents

I did wonder why JSON should be used to constrain model construction for the special AI model, because according to the schema, any network layer with any parameters can be written directly into the model, no matter how unusual or complex. But then I realized that this flexibility is cumbersome for users, because a common situation exists: a user builds a layer from its most basic attributes, writing a long section of code to describe the layer's operation, and later wants to build the same type of layer again. I don't think the user will want to set up all those attributes from scratch a second time (that experience would be very bad).

I want to make the user experience better. That is my original motivation for writing a JSON specification file, and it is also an important focus of the later coding. Therefore I created the following JSON specification file; to keep this article short, I only show the description of the convolution layer:

[
    // There are other layer descriptions before
    /* <---------Start of meta description of convolution layer ----------- > */
    {
      //The value corresponding to "name" indicates the type of layer. Here is convolution
      "name": "Convolution2dLayer", 
      //The value corresponding to "category" indicates the type described in this paragraph. Here, it indicates that it describes the layer composition
      "category": "Layer",
      //"Attributes" corresponds to a list indicating the attributes owned by this type of layer and the data type corresponding to the attribute
      "attributes": [
        { "name": "padTop", "type": "uint32" },
        { "name": "padRight", "type": "uint32" },
        { "name": "padBottom", "type": "uint32" },
        { "name": "padLeft", "type": "uint32" },
        { "name": "strideX", "type": "uint32" },
        { "name": "strideY", "type": "uint32" },
        { "name": "dilationX", "type": "uint32" },
        { "name": "dilationY", "type": "uint32" },
        { "name": "dataLayout", "type": "DataLayout" }
      ],
      //The value corresponding to "inputs" is a list. The elements in the list represent the inputs of the network layer and express the corresponding meaning
      // Here are input, weight and offset 
      "inputs": [
        { "name": "input" },
        { "name": "weights" },
        { "name": "biases" }
      ]
    },
    /* <---------End of meta description of convolution layer ----------- > */
    {
      "name": "AdditionLayer",
      "inputs": [
        { "name": "A" },
        { "name": "B" }
      ],
      "outputs": [
        { "name": "C" }
      ]
    },
    // There are other layer descriptions later
    // ......
]

Following the comments and the convolution example in the JSON above, we can add meta descriptions for more types of network layers. I then saved the finished JSON as pzk-metadata.json and placed it in the model-flatbuffer folder:

$ ls model-flatbuffer
pzk-metadata.json  pzk-schema.fbs

2.3. Configure relevant libraries for json parsing

The programming language of our target platform is C++, so we need a library that can parse JSON files in C++, preferably one with few dependencies. After searching the Internet, I settled on an open-source project named json11:

json11, a lightweight JSON parsing C++ library: https://github.com/dropbox/json11

We can configure it into our project with the following commands:

$ mkdir 3rdparty
$ cd 3rdparty
$ git clone https://github.com/dropbox/json11.git
$ cp json11/json11.hpp ../include
$ mkdir -p ../src
$ cp json11/json11.cpp ../src
$ cd ..
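Once json11 is in place, loading the specification boils down to reading the file text and handing it to json11::Json::parse. The PzkM class later declares a static ReadJson helper for this; its implementation is not shown in this post, so the following is only a minimal sketch of the file-reading half (read_file is a hypothetical name, and the json11 parse call is left as a comment so the sketch has no external dependencies):

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Hypothetical sketch: slurp the whole spec file into a string.
// With json11 configured, a ReadJson-style helper would then do:
//   std::string err;
//   json11::Json meta = json11::Json::parse(text, err);
std::string read_file(const std::string& path)
{
    std::ifstream in(path);
    if (!in.is_open())
        return ""; // caller treats an empty string as a failed open
    std::ostringstream ss;
    ss << in.rdbuf();   // copy the entire stream contents
    return ss.str();
}
```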

3, Formal coding

This step is the most time-consuming, mainly because it needs to balance the generality and user-friendliness of the interface. Generally speaking, when there is a lot of code with a clear hierarchy, a top-down, global-to-local approach is adopted. However, I personally prefer a bottom-up, local-to-global development approach, because from a user's perspective, building a network means first building the input, then building the network layers; building a network layer requires building its attribute description; and finally the output is set. For instantiating network layers, this local-to-global approach seems better suited to writing the API of the special AI model.

3.1. json specification class preparation

Working from parts to the whole, we write the JSON specification classes in the same way. Since the JSON file as a whole is a list whose elements are meta descriptions of different layer types, we first build a class for a single meta description, named min_meta; one min_meta holds the meta description of one layer type. From the member variables of min_meta, you can see that it is a reconstruction of one meta description from the JSON:

// one layer description from the json file
class min_meta
{
private:
    /* data */
public:
    min_meta(){};
    min_meta(json11::Json onelayer);
    ~min_meta();
    void print();
    std::string name; //The name of the layer type, such as "Convolution2dLayer"
    std::string category; //The category it belongs to, generally "Layer"
    /* Attribute dictionary, such as:
    {"padTop":"uint32", "padRight":"uint32"}
    */
    std::map<std::string, std::string> attributes;
    std::vector<std::string> inputs;
    std::vector<std::string> outputs;
    std::string nkey = "name";
    std::string ckey = "category";
    std::string akey = "attributes";
    std::string ikey = "inputs";
    std::string okey = "outputs";
};

Continuing from parts to the whole, for the JSON file itself we treat min_meta as a list element and encapsulate it in a jsonmeta class, which is the C++ class corresponding to the JSON specification file, as shown below:

// class describing the whole json file
class jsonmeta
{
private:
    void _getinfo();
public:
    jsonmeta(){};
    jsonmeta(json11::Json jmeta);
    ~jsonmeta();
    void printinfo();
    bool has_layer(std::string);
    min_meta get_meta(std::string layer);
    void updata(json11::Json jmeta);
    std::vector<min_meta> meta; //The meta description list contains meta descriptions of all different types of network layers
    std::map<std::string, size_t> laycategory;
    std::vector<std::string> layname;
};

There is still a gap between jsonmeta and FlatBuffers: we cannot yet use the min_meta inside jsonmeta to standardize the instantiation of a network layer. So we created the layer_maker class, which, as its name suggests, is mainly used to build a network layer, that is, to instantiate one. Through layer_maker's interface, users can set the attributes, weights and output information of the network layer, as shown below:

// for building a layer
class layer_maker
{
private:
    /* data */
public:
    layer_maker();
    layer_maker(min_meta layer_meta, uint32_t layerid, std::string layername);
    ~layer_maker();
    bool add_input(uint32_t id, std::string input_name = "");    
    bool add_output(uint32_t id, std::string output_name = "" , bool force_set = true);
    bool add_attr(std::string key, std::vector<uint8_t> buf);
    static DataType string2datatype(std::string a);
    static std::vector<uint32_t> return_id(std::vector<Conn> a);
    min_meta meta_info;
    uint32_t layer_id;
    std::string type;
    std::string name;
    uint8_t input_num = 0;
    uint8_t output_num = 0;
    bool require_attrs = false;
    std::vector<struct Conn> input_id;
    std::vector<struct Conn> output_id;
    struct Attrs attrs;
};
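Among these members, static helpers like string2datatype are the easiest to picture. Its implementation is not shown in this post, but it presumably maps the "type" strings used in pzk-metadata.json (such as "uint32") onto the schema's DataType enum. A minimal sketch, using a stand-in enum since the real one comes from pzk-schema_generated.h (the exact string spellings and enum values are assumptions):

```cpp
#include <map>
#include <string>

// Stand-in for the schema enum; the real DataType comes from
// pzk-schema_generated.h.
enum DataType { DataType_UNKNOWN = 0, DataType_UINT32, DataType_INT32, DataType_FP32 };

// Sketch of layer_maker::string2datatype: look the JSON type string up
// in a table, falling back to UNKNOWN for anything unrecognized.
DataType string2datatype(const std::string& s)
{
    static const std::map<std::string, DataType> table = {
        { "uint32",  DataType_UINT32 },
        { "int32",   DataType_INT32 },
        { "float32", DataType_FP32 },
    };
    auto it = table.find(s);
    return it == table.end() ? DataType_UNKNOWN : it->second;
}
```

Non-primitive attribute types such as "DataLayout" would need their own handling; the fallback to UNKNOWN gives the caller a place to add validation logic, in line with the standardization goal above.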

3.2. Model instantiation builder writing

Critically, the model instantiation builder is the most important part. Its job is to integrate the parsing of the JSON specification file with the layer_maker layer builder, and to interface with the model protocol described by schema.fbs, providing the functions of generating and saving models.

Therefore, my model instantiation builder is named PzkM, and its main components are as follows:

// main class for building a pzk model
class PzkM
{
private:
    /* data */
public:
    PzkM();
    PzkM(std::string jsonfile);
    ~PzkM();

    void add_info(std::string author="pzk", std::string version="v1.0", std::string model_name="Model");
    void create_time();
    uint32_t layout_len(DataLayout layout);
    std::vector<uint32_t> remark_dims(std::vector<uint32_t> dims, DataLayout layout);
    uint32_t add_input(std::vector<uint32_t> dims, DataLayout layout = DataLayout_NCHW, DataType datatype = DataType_FP32);
    uint32_t add_tensor(std::vector<uint32_t> dims, std::vector<uint8_t> weight, DataLayout layout = DataLayout_NCHW ,TensorType tensor_type = TensorType_CONST , DataType datatype = DataType_FP32);
    bool add_layer(layer_maker layerm);
    bool set_as_output(uint32_t id);
    bool has_tensor(uint32_t id);
    bool has_layer(uint32_t id);
    bool model2file(std::string filepath);
    layer_maker make_empty_layer(std::string layertype, std::string layername = "");
    static uint32_t datatype_len(DataType datatype);
    static json11::Json ReadJson(std::string file);
    static uint32_t shape2size(std::vector<uint32_t> dims);
    jsonmeta meta;
    std::string author;
    std::string version;
    std::string model_name;
    struct tm * target_time;
    uint32_t model_runtime_input_num = 0;
    uint32_t model_runtime_output_num = 0;
    std::vector<uint32_t> model_runtime_input_id;
    std::vector<uint32_t> model_runtime_output_id;
    std::vector<flatbuffers::Offset<Tensor>> tensors;
    std::vector<uint32_t> all_tensor_id;
    std::vector<flatbuffers::Offset<Layer>> layers;
    std::vector<uint32_t> all_layer_id;
    flatbuffers::FlatBufferBuilder builder;
    uint32_t tensor_id = 0;
    uint32_t layer_id = 0;
    
};
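Some of PzkM's static helpers are easy to sketch from their declarations alone. shape2size presumably returns the element count of a tensor, i.e. the product of its dimensions (the name and semantics are an assumption based on the declaration and its likely call sites):

```cpp
#include <cstdint>
#include <vector>

// Sketch of PzkM::shape2size: the element count of a tensor is the
// product of its dimensions.
uint32_t shape2size(const std::vector<uint32_t>& dims)
{
    uint32_t size = 1;
    for (uint32_t d : dims)
        size *= d;
    return size;
}
```

For the convolution weight tensor with dims {10, 3, 4, 4} used in the example below, this gives 480 elements, which matches the ele_num field printed in the verification output of section 4.3.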

Through this PzkM class, the JSON and the schema are connected, and the ability to build and save user models is realized. Finally, all of the above declarations and implementations are saved in pzk.hpp, placed in the include directory:

$ ls include
json11.hpp  pzk.hpp  pzk-schema_generated.h

4, The first model to build a special ai model

4.1. Example preparation

The following example constructs a network with only a single convolution layer; all steps are marked with comments in the code:

#include "pzk.hpp"
#include <iostream>

std::vector<float> rand_weight(uint32_t num=100)
{
    srand(num);
    std::vector<float> weight;
    for (size_t i = 0; i < num; i++)
    {
        weight.push_back( ((rand() % 10) - 4.5f) / 4.5f);
    }
    return weight;
}
template<class T>
std::vector<uint8_t> fp2ubyte(std::vector<T> w1)
{
    std::vector<uint8_t> buf;
    for (size_t i = 0; i < w1.size(); i++)
    {
        T* one = &w1[i];
        uint8_t* charp = reinterpret_cast<uint8_t*>(one); 
        buf.push_back(charp[0]);
        buf.push_back(charp[1]);
        buf.push_back(charp[2]);
        buf.push_back(charp[3]);
    }
    return buf;
    
}


int main(int argc, char **argv) {
    if(argc == 3 && argv[1] == std::string("--json"))
    {
        // 1. Initialize the model instantiation builder with json file for normalization
        PzkM smodel(argv[2]);
        //PzkM smodel("/home/pack/custom-model/model-flatbuffer/pzk-metadata.json");
        // 2. Add the attached information of the model, such as name and model version number, and increase the model creation time
        smodel.add_info("pengzhikang", "v2.1", "holly-model");
        smodel.create_time();
        // 3. Create an input for the model
        std::vector<uint32_t> input_dims = {1,3,416,416};
        uint32_t input_id = smodel.add_input(input_dims);
        // 4. Instantiate a convolution layer for the model and add it to the model
        // 4.1 get layer_maker, that is, the layer instantiation builder
        layer_maker l = smodel.make_empty_layer("\"Convolution2dLayer\"", "conv2d-index-1");
        // 4.2 adding weights to this convolution layer
        std::vector<float> org_weight  = rand_weight(10*3*4*4);
        std::vector<uint32_t> wdims;
        wdims.push_back(10);
        wdims.push_back(3);
        wdims.push_back(4);
        wdims.push_back(4);
        uint32_t weight_id = smodel.add_tensor(wdims,
                                            fp2ubyte<float>(org_weight));
        // 4.3 add bias to the convolution layer
        std::vector<float> org_bias = rand_weight(10);
        uint32_t bias_id = smodel.add_tensor(std::vector<uint32_t>({10}),
                                            fp2ubyte<float>(org_bias));
        // 4.4 add output for this convolution layer
        uint32_t output_id = smodel.add_tensor(std::vector<uint32_t>({1,10,416/4,416/4}),
                                                std::vector<uint8_t>(), DataLayout_NCHW, TensorType_DYNAMIC);
        // 4.5 add attribute configuration information for the convolution layer
        l.add_input(input_id, "\"input\"");
        l.add_input(weight_id, "\"weights\"");
        l.add_input(bias_id, "\"biases\"");
        l.add_output(output_id, "\"conv2d-output\"");
        l.add_attr("\"padTop\"", fp2ubyte<uint32_t>(std::vector<uint32_t>({0})));
        // 4.6 add the configured convolution layer to the model
        smodel.add_layer(l);
        // 4.7 setting the output information of the model
        smodel.set_as_output(output_id);
        // 4.8 generate model into model file
        smodel.model2file("first.PZKM");
        
    }
    return 0;
}

After the example is written, save it as create_model_sample.cpp in the src directory.
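One detail worth noting: fp2ubyte in the example copies exactly four bytes per element, so it quietly assumes sizeof(T) == 4 (true for float and uint32_t here) and the writer's native byte order. A self-contained round-trip sketch that makes the assumption explicit, with a hypothetical ubyte2fp that a model loader would need to recover the values:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Equivalent to the example's fp2ubyte: pack 4-byte elements into a
// raw byte buffer in native byte order.
template <class T>
std::vector<uint8_t> fp2ubyte(const std::vector<T>& w)
{
    static_assert(sizeof(T) == 4, "format stores 4-byte elements");
    std::vector<uint8_t> buf(w.size() * sizeof(T));
    std::memcpy(buf.data(), w.data(), buf.size());
    return buf;
}

// Hypothetical inverse for a loader: reinterpret the byte buffer back
// into 4-byte elements.
template <class T>
std::vector<T> ubyte2fp(const std::vector<uint8_t>& buf)
{
    static_assert(sizeof(T) == 4, "format stores 4-byte elements");
    std::vector<T> out(buf.size() / sizeof(T));
    std::memcpy(out.data(), buf.data(), buf.size());
    return out;
}
```

If the format ever needs to move between machines of different endianness, the byte order would have to be pinned down in the schema rather than left to the writer.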

4.2. Compiling and running the example

cmake is used for compilation. For how to write the CMakeLists.txt, you can download the whole project from the link at the end of this article and compile it yourself. The compilation commands are as follows:

# The compilation commands are as follows
$ mkdir build
$ cd build
$ cmake ..
$ make -j16

After compilation, an executable program called first_model will appear in the build/release directory. Execute the following commands:

# The run command is as follows
$ cd release
$ ./first_model --json ../../model-flatbuffer/pzk-metadata.json
# After running, the following information will be printed out
Result: open ../../model-flatbuffer/pzk-metadata.json success
this model size is 3288
# At the same time, a model file named first.PZKM is generated in this directory; it is the first model file in our special AI format
$ ls
first_model first.PZKM

first.PZKM is our first model file.

4.3. Verifying that the model is correct

Because the model above is a binary file and not human-readable, we can use the flatc tool to reinterpret the binary first.PZKM model file as a JSON text file, and then view the information inside the model. Use the following commands:

$ flatc --raw-binary -t model-flatbuffer/pzk-schema.fbs -- build/release/first.PZKM
$ cat first.json

At this point, we will get the model readable information as follows:

{
  author: "pengzhikang",
  create_time: {
    year: 2021,
    month: 12,
    day: 9,
    hour: 23,
    min: 38,
    sec: 53
  },
  version: "v2.1",
  model_name: "holly-model",
  model_runtime_input_num: 1,
  model_runtime_output_num: 1,
  model_runtime_input_id: [
    0
  ],
  model_runtime_output_id: [
    3
  ],
  all_tensor_num: 4,
  tensor_buffer: [
    {
      name: "model_input_0",
      tesor_type: "DYNAMIC",
      data_type: "FP32",
      shape: {
        dimsize: 4,
        dims: [
          1,
          3,
          416,
          416
        ]
      }
    },
    {
      id: 1,
      name: "tensor_1",
      data_type: "FP32",
      shape: {
        dimsize: 4,
        dims: [
          10,
          3,
          4,
          4
        ]
      },
      weights: {
        ele_bytes: 4,
        ele_num: 480,
        buffer: [...]
      }
    },
    {
      id: 2,
      name: "tensor_2",
      data_type: "FP32",
      shape: {
        dimsize: 4,
        dims: [
          10,
          1,
          1,
          1
        ]
      },
      weights: {
        ele_bytes: 4,
        ele_num: 10,
        buffer: [...]
      }
    },
    {
      id: 3,
      name: "tensor_3",
      tesor_type: "DYNAMIC",
      data_type: "FP32",
      shape: {
        dimsize: 4,
        dims: [
          1,
          10,
          104,
          104
        ]
      }
    }
  ],
  layer_num: 1,
  layer_buffer: [
    {
      name: "conv2d-index-1",
      type: "\"Convolution2dLayer\"",
      input_num: 3,
      input_id: [
        {
          name: "\"input\"",
          necessary: true
        },
        {
          name: "\"weights\"",
          necessary: true,
          tensor_id: 1
        },
        {
          name: "\"biases\"",
          necessary: true,
          tensor_id: 2
        }
      ],
      output_id: [
        {
          name: "\"conv2d-output\"",
          necessary: true,
          tensor_id: 3
        }
      ],
      require_attrs: true,
      attrs: {
        type: "\"Convolution2dLayer\"-Attrs",
        meta_num: 9,
        meta_require_num: 9,
        buffer: [
          {
            key: "\"padTop\"",
            require: true,
            buffer_data: "CHAR",
            buffer: [
              0,
              0,
              0,
              0
            ]
          },
        ]
      }
    }
  ]
}

The JSON shown above has been trimmed by me for better display. From this text description, we have verified that our special AI model format really works: we have obtained our first completely customized model file!

At present, the project has been uploaded to GitHub; the link is as follows:

pengzhikang/Custom-Model: https://github.com/pengzhikang/Custom-Model

Topics: Algorithm AI