Full-stack blog development: improvement and containerization of the project

Posted by tonbah on Sun, 05 Dec 2021 02:08:04 +0100

Original link:

https://llfc.club/category?catid=20RbopkFO8nsJafpgCwwxXoCWAs#!aid/21lXHiY0k69T0TXhqqk9BsrlPZy

Objectives of this section

In the previous section the template rendering was completed, and in my spare time I added several more pages; you can follow the day-by-day progress through the repository branches. In this section we add configuration-file reading, add a redis cache so that frequently accessed data is read from redis first, add a logging library for printing logs, and finally containerize the project.

redis cache

An earlier article covered the basic add, delete, update and query operations for redis; the usage here is much the same, with redis acting as a cache that is checked before the database in order to speed up article queries.
Initialize the redis connection pool:

// rediscli, clearch and exitch are package-level variables declared elsewhere in the package:
//     var rediscli *redis.Client
//     var clearch, exitch chan struct{}
func InitRedis() {
	// Build the client from the values read out of config.toml
	rediscli = redis.NewClient(&redis.Options{
		Addr:         config.TotalCfgData.Redis.Host,
		Password:     config.TotalCfgData.Redis.Passwd,
		DB:           config.TotalCfgData.Redis.DB,
		PoolSize:     config.TotalCfgData.Redis.PoolSize,
		MinIdleConns: config.TotalCfgData.Redis.IdleCons,
	})

	// Verify the connection before declaring the cache usable
	_, err := rediscli.Ping().Result()
	if err != nil {
		log.Println("ping failed, error is ", err)
		return
	}

	// Channels used elsewhere in the cache package (presumably for cache-clear
	// requests and shutdown signaling, judging by their names)
	clearch = make(chan struct{}, 1000)
	exitch = make(chan struct{})
	log.Println("redis init success!!!")
}

For example, this is how an admin session is saved in redis:

// AddAdminSession stores one admin session in the ADMIN_SESSION hash
// (field: session id, value: serialized session data) and refreshes the
// 30-day expiry on the whole hash.
func AddAdminSession(sessionId string, sessionData string) error {
	_, err := rediscli.HSet(ADMIN_SESSION, sessionId, sessionData).Result()
	rediscli.Expire(ADMIN_SESSION, time.Hour*24*30)
	return err
}
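
Reading the session back is symmetric. A minimal sketch, assuming the same rediscli client and ADMIN_SESSION key; the actual helpers in the repository may be named differently:

func GetAdminSession(sessionId string) (string, error) {
	// A redis.Nil error means the session does not exist (or the hash has expired).
	return rediscli.HGet(ADMIN_SESSION, sessionId).Result()
}

func DelAdminSession(sessionId string) error {
	// Remove a single session, e.g. on logout; the rest of the hash is untouched.
	return rediscli.HDel(ADMIN_SESSION, sessionId).Err()
}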


By analogy, many more redis read and write helpers are implemented in the same way; I will not repeat them all one by one.
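
As one illustration of the "redis first" read path mentioned at the start of this section, here is a minimal cache-aside sketch. ARTICLE_CACHE, loadArticleFromMongo and the string payload are hypothetical names used for the example, not the project's actual API:

// GetArticleCached checks redis first and falls back to the database on a miss,
// then writes the result back into the cache (cache-aside pattern).
func GetArticleCached(articleId string) (string, error) {
	data, err := rediscli.HGet(ARTICLE_CACHE, articleId).Result()
	if err == nil {
		return data, nil // cache hit
	}
	if err != redis.Nil {
		return "", err // a real redis error, not just a miss
	}
	// Cache miss: load from mongo (hypothetical helper), then backfill redis.
	data, err = loadArticleFromMongo(articleId)
	if err != nil {
		return "", err
	}
	rediscli.HSet(ARTICLE_CACHE, articleId, data)
	return data, nil
}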

Configuration file reading

The configuration used by the server is kept in config/config.toml:

[mongo]
    host = "81.68.86.123:27017"
    user = "admin"
    passwd = "12345678"
    maxpoolsize = 10
    contimeout  = "5000"
    maxconidle = "5000"
    database = "blog"
 
[cookie]
    host = "81.68.86.123"
    #To start locally, set host to localhost instead:
    #host = "localhost"
    alive = 86400
 
[location]
    timezone = "Asia/Shanghai"
 
[redis]
    host = "81.68.86.123:6379"
    idlecons = 16
    poolsize = 1024
    idletimeout = 300
    passwd = "123456"
    db = 0

Then config.go is implemented to read the configuration file:

package config
 
import (
	"log"
 
	"flag"
 
	"github.com/BurntSushi/toml"
)
 
// Note: struct tags must be written as key:"value" with no space after the colon,
// otherwise the reflect package ignores them.
type MongoCfg struct {
	User        string `toml:"user"`
	Passwd      string `toml:"passwd"`
	Host        string `toml:"host"`
	MaxPoolSize int16  `toml:"maxpoolsize"`
	MaxConIdle  string `toml:"maxconidle"`
	ConTimeOut  string `toml:"contimeout"`
	Database    string `toml:"database"`
}

type CookieCfg struct {
	Host  string `toml:"host"`
	Alive int    `toml:"alive"`
}

type RedisCfg struct {
	Host        string `toml:"host"`
	PoolSize    int    `toml:"poolsize"`
	IdleCons    int    `toml:"idlecons"`
	IdleTimeout int    `toml:"idletimeout"`
	Passwd      string `toml:"passwd"`
	DB          int    `toml:"db"`
}

type TotalCfg struct {
	Mongo     MongoCfg  `toml:"mongo"`
	Cookie    CookieCfg `toml:"cookie"`
	Location_ Location  `toml:"location"`
	Redis     RedisCfg  `toml:"redis"`
}

type Location struct {
	TimeZone string `toml:"timezone"`
}
 
var TotalCfgData TotalCfg
 
func init() {
	cfgpath := flag.String("config", "./config/config.toml", "-config ./config/config.toml")
	flag.Parse()
	if _, err := toml.DecodeFile(*cfgpath, &TotalCfgData); err != nil {
		log.Println("decode file failed , error is ", err)
		panic("decode file failed")
	}
}


The above code decodes the configuration file into the struct fields according to their toml tags, so the settings become available to the rest of the program through config.TotalCfgData.
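
A minimal sketch of consuming the parsed values from another package; the import path is an assumption, and the [location] usage is purely illustrative:

package main

import (
	"log"
	"time"

	"github.com/secondtonone1/bstgo-blog/config" // assumed module path
)

func main() {
	// The fields are populated by config's init() before main runs.
	log.Println("redis addr:", config.TotalCfgData.Redis.Host)

	// Illustrative use of the [location] section: format times in the configured zone.
	loc, err := time.LoadLocation(config.TotalCfgData.Location_.TimeZone)
	if err != nil {
		log.Println("load location failed, falling back to UTC:", err)
		loc = time.UTC
	}
	log.Println("now:", time.Now().In(loc).Format("2006-01-02 15:04:05"))
}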

Add the log library

For logging I chose uber's zap library, paired with lumberjack to handle log rotation:

package logger
 
import (
	"github.com/natefinch/lumberjack"
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)
 
var Sugar *zap.SugaredLogger = nil
 
func getLogWriter() zapcore.WriteSyncer {
 
	lumberJackLogger := &lumberjack.Logger{
		Filename:   "./log/blog.log",
		MaxSize:    10,
		MaxBackups: 5,
		MaxAge:     30,
		Compress:   false,
	}
 
	return zapcore.AddSync(lumberJackLogger)
}
 
func init() {
	// Encoder configuration
	config := zap.NewProductionEncoderConfig()
	// Specify time encoder
	config.EncodeTime = zapcore.ISO8601TimeEncoder
	// Log level in uppercase
	config.EncodeLevel = zapcore.CapitalLevelEncoder
	// encoder
	encoder := zapcore.NewConsoleEncoder(config)
	writeSyncer := getLogWriter()
	// Create Logger
	core := zapcore.NewCore(encoder, writeSyncer, zapcore.DebugLevel)
	logger := zap.New(core, zap.AddCaller())
	Sugar = logger.Sugar()
	// Print log
	Sugar.Info("logger init success")
}


The logger caps each log file at 10 MB, keeps at most five backups for up to 30 days, and, thanks to zap.AddCaller, records the file and line number of each log call.
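
Once the package's init has run, any other package can log through the global Sugar. A minimal usage sketch, assuming a hypothetical module path for the import:

package main

import (
	"errors"

	"github.com/secondtonone1/bstgo-blog/logger" // assumed module path
)

func main() {
	// Flush any buffered log entries when the process exits.
	defer logger.Sugar.Sync()

	// printf-style logging
	logger.Sugar.Infof("server starting on port %d", 8080)

	// key/value structured logging for an error case
	err := errors.New("connection refused")
	logger.Sugar.Errorw("redis get failed", "key", "admin_session", "error", err)
}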

Containerization

First write the Dockerfile, then build an image from it:

FROM golang:1.16
# Set the environment variables needed for the build
ENV GO111MODULE=on \
    CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=amd64 \
    GOPROXY="https://goproxy.cn,direct"

ENV TZ=Asia/Shanghai

# Create the code directory
WORKDIR /src
# Copy the source code into it
COPY . .
# Compile the code into a binary executable
RUN go build -o main .
# Create the runtime directory
WORKDIR /bin
# Move the binary from /src to /bin
RUN cp /src/main .
# Copy the configuration and static resources the project needs into this directory
RUN cp -r /src/config .
RUN cp -r /src/public .
RUN cp -r /src/views .

# Expose the service port
EXPOSE 8080

# Command executed when the container starts
CMD ["/bin/main"]

The ENV instructions prepare the Go build environment; the time zone is set to Shanghai and the working directory to /src. The source code is copied from the host into the image and compiled there, and the resulting binary, configuration and static assets are placed in /bin, the directory the container runs from.
In the root directory of the project, run the following command to build the image:

docker build -t blog .

Then start the container. The -v option mounts the host directory /data/blog/log over /bin/log inside the container so the rotated log files persist on the host, and --restart=always brings the container back up automatically if it exits:

docker run --name blogds -p 8088:8088 -v /data/blog/log:/bin/log  --restart=always  -d blog


summary

So far we have completed the development of the blog system's backend. Three articles cannot cover every detail, so these posts only outline the main stages the backend went through; the full code is on GitHub. Thanks for reading, and stars are always welcome.
Source address:
https://github.com/secondtonone1/bstgo-blog

Topics: Go