Elegant log output in Python

Posted by werushka on Sat, 08 Jan 2022 13:59:32 +0100

Explanation:

I Using the logging module
When writing Python code, the logging module is usually configured with a few basic lines, like this:

import logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

logger.info('this is another debug message')
logger.warning('this is another debug message')
logger.error('this is another debug message')
logger.info('this is another debug message')


The results are as follows:
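
(The timestamps and the module name __main__ below are illustrative; the exact values depend on when and where you run the script.)

2022-01-08 13:59:32,123 - __main__ - INFO - this is another debug message
2022-01-08 13:59:32,123 - __main__ - WARNING - this is another debug message
2022-01-08 13:59:32,124 - __main__ - ERROR - this is another debug message
2022-01-08 13:59:32,124 - __main__ - INFO - this is another debug message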


II Basic use of the loguru module
If you want something more concise, you can use the loguru library. Installation (Python 3): pip3 install loguru.

loguru's default output format already includes the time, level, module name, line number and log message, so you can use it directly without manually creating a logger. Its output is also colorized, which looks friendlier.

The usage is as follows:

from loguru import logger

logger.debug('this is a debug message')
logger.info('this is another debug message')
logger.warning('this is another debug message')
logger.error('this is another debug message')
logger.info('this is another debug message')
logger.success('this is success message!')
logger.critical('this is critical message!')


The results are as follows:
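
(Again, the timestamps and line numbers below are illustrative; on a real terminal each level is also rendered in its own color.)

2022-01-08 13:59:32.123 | DEBUG    | __main__:<module>:3 - this is a debug message
2022-01-08 13:59:32.124 | INFO     | __main__:<module>:4 - this is another debug message
2022-01-08 13:59:32.124 | WARNING  | __main__:<module>:5 - this is another debug message
2022-01-08 13:59:32.125 | ERROR    | __main__:<module>:6 - this is another debug message
2022-01-08 13:59:32.125 | INFO     | __main__:<module>:7 - this is another debug message
2022-01-08 13:59:32.126 | SUCCESS  | __main__:<module>:8 - this is success message!
2022-01-08 13:59:32.126 | CRITICAL | __main__:<module>:9 - this is critical message!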


There is no need to configure anything: just import the ready-made logger object and call its debug, info and other methods directly.

If you also want to write the log to a file, just add a sink:

from loguru import logger

logger.add('my_log.log')
logger.debug('this is a debug')


After running, you will find a my_log.log file in the directory, containing the DEBUG message that was just printed to the console. The above is only the basic usage; more details are below.

III Detailed use of loguru
loguru supports, for example, writing to multiple files, splitting output by level, rotating to new files when they grow too large, deleting old files automatically, and so on.

3.1 Definition of the add method

def add(
        self,
        sink,
        *,
        level=_defaults.LOGURU_LEVEL,
        format=_defaults.LOGURU_FORMAT,
        filter=_defaults.LOGURU_FILTER,
        colorize=_defaults.LOGURU_COLORIZE,
        serialize=_defaults.LOGURU_SERIALIZE,
        backtrace=_defaults.LOGURU_BACKTRACE,
        diagnose=_defaults.LOGURU_DIAGNOSE,
        enqueue=_defaults.LOGURU_ENQUEUE,
        catch=_defaults.LOGURU_CATCH,
        **kwargs
    ):
    pass


Looking at its source code, add supports many parameters, such as level, format, filter, colorize, and so on. We also notice a very important parameter: sink. From the official documentation we can learn that sink accepts a variety of different types, summarized as follows:

sink can be a file object, such as sys.stderr or open('file.log', 'w').
sink can be a str string or a pathlib.Path object, which represents a file path. If this type is recognized, loguru automatically creates a log file at that path and writes the log to it.
sink can be a callable, so you can define your own output implementation.
sink can be a Handler from the logging module, such as FileHandler, StreamHandler, and so on.
sink can also be a custom class. See the official documentation for the implementation specification: https://loguru.readthedocs.io/en/stable/api/logger.html#sink .
So the file output we demonstrated just now simply passed a str path, and loguru created the log file for us. That is all there is to it; a short sketch of a few sink types is shown below.
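
A minimal sketch illustrating a few of these sink types (the file name file.log and the lambda are arbitrary examples, not from the original post):

import sys
from loguru import logger

logger.add(sys.stderr, level="WARNING")       # a file-like object
logger.add('file.log')                        # a str path: loguru creates the file for us
logger.add(lambda msg: print(msg, end=''))    # a callable receiving the formatted message

logger.warning('goes to stderr, file.log and the callable')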

3.2 Basic parameters
Let's look at some of its other parameters, such as format, filter, level, etc.

Their concepts and formats are basically the same as those of the logging module. For example, here format, filter and level are used to specify the output:

logger.add('runtime.log', format="{time} {level} {message}", filter="my_module", level="INFO")

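As a side note, the filter parameter can also be a callable that receives the log record; below is a minimal sketch under that assumption (the function name only_errors and the file name errors.log are made up for illustration):

from loguru import logger

def only_errors(record):
    # keep only records at ERROR level or above
    return record["level"].no >= logger.level("ERROR").no

logger.add('errors.log', filter=only_errors)
logger.info('this is filtered out')
logger.error('this is written to errors.log')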

3.3 Removing a sink
After adding a sink, we can also remove it again, which is useful when we want to stop writing to it and switch to new output.

When removing, use the id returned by the add method. See the following example:

from loguru import logger

trace = logger.add('my_log.log')
logger.debug('this is a debug message')
logger.remove(trace)
logger.debug('this is another debug message')



Here we first add a sink and assign its return value to trace. We then output a log message, pass the trace variable to the remove method, and output another log message to see what happens.

The console output is as follows:
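
A sketch of what the console shows (loguru's default stderr sink is untouched, so both messages still appear there; timestamps are illustrative):

2022-01-08 13:59:32.123 | DEBUG    | __main__:<module>:4 - this is a debug message
2022-01-08 13:59:32.124 | DEBUG    | __main__:<module>:6 - this is another debug message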


The contents of the log file my_log.log are as follows:
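
A sketch of the file contents, containing only the message logged before remove was called (timestamp illustrative):

2022-01-08 13:59:32.123 | DEBUG    | __main__:<module>:4 - this is a debug message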


It can be seen that after calling the remove method, the second message no longer appears in the file. Nothing is actually deleted: once the sink is removed, subsequent messages are simply no longer written to that file.

In this way, we can switch log destinations on the fly.

3.4 Rotation configuration
With loguru we can also easily configure rotation, for example to produce one log file per day, or to split the log automatically when a file grows too large. This is done with the rotation parameter of the add method.

Let's look at the following example:

logger.add('runtime_{time}.log', rotation="500 MB")



With this configuration, a new log file is created whenever the current one exceeds 500 MB, so no single file grows too large. Because we put a {time} placeholder in the file name, the time is filled in automatically when each file is generated, so every log file name contains its creation time.

In addition, we can also use the rotation parameter to create log files regularly, for example:

logger.add('runtime_{time}.log', rotation='00:00')


This creates a new log file at midnight (00:00) every day.

In addition, we can also configure the rotation interval of the log file. For example, to start a new log file every week, write:

logger.add('runtime_{time}.log', rotation='1 week')


This way, a new log file is created every week.

3.5 Retention configuration
In many cases, very old logs are no longer useful; they just take up storage space, which is wasteful if they are never cleaned up. The retention parameter configures how long log files are kept.

For example, if we want to set the maximum retention of log files for 10 days, we can configure it as follows:

logger.add('runtime.log', retention='10 days')


This keeps only the most recent 10 days of log files, so you no longer need to worry about old logs piling up.

3.6 Compression configuration
loguru can also compress log files, for example saving them in zip format:

logger.add('runtime.log', compression='zip')


This can save more storage space.
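
Putting these options together, here is a minimal sketch that combines rotation, retention and compression in a single add call (the file name and the values are arbitrary examples):

from loguru import logger

logger.add(
    'runtime_{time}.log',
    rotation='500 MB',     # start a new file once the current one exceeds 500 MB
    retention='10 days',   # delete rotated files older than 10 days
    compression='zip',     # compress rotated files to save space
)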

3.7 String formatting
loguru also provides very friendly string formatting when outputting logs, for example:

logger.info('If you are using Python {}, prefer {feature} of course!', 3.6, feature='f-strings')


In this way, it is very convenient to add parameters.

3.8 Traceback recording
In many cases, when we hit a runtime error and have not configured Traceback output in our logging, it can be very hard to track the error down afterwards.

With loguru, however, we can record the Traceback directly using the decorator it provides, with a configuration like this:

@logger.catch
def my_function(x, y, z):
    # An error? It's caught anyway!
    return 1 / (x + y + z)


Let's test it: call the function with all three arguments set to 0, which triggers a division-by-zero error, and see what happens:

my_function(0, 0, 0)


After running, you will find the Traceback information in the log, including the variable values at the time of the error, which is extremely helpful. The result looks like this:

> File "run.py", line 15, in <module>
    my_function(0, 0, 0)
    └ <function my_function at 0x1171dd510>

  File "/private/var/py/logurutest/demo5.py", line 13, in my_function
    return 1 / (x + y + z)
                │   │   └ 0
                │   └ 0
                └ 0

ZeroDivisionError: division by zero


Therefore, log tracing can be easily achieved with loguru, and debugging efficiency may well be ten times higher.
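
As a side note not covered in the original post, logger.catch can also be used as a context manager when you only want to guard a specific block of code; a minimal sketch:

from loguru import logger

def risky_division(x, y):
    return x / y

# any exception raised inside the with-block is logged together with its Traceback
with logger.catch():
    risky_division(1, 0)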

In addition, loguru has many other powerful features that are not covered here. For more details, see the official loguru documentation: https://loguru.readthedocs.io/en/stable/index.html .

After reading it, it's time to replace your logging module with loguru!
 

Code example:

import datetime
from flask import Flask
from loguru import logger

# The earlier sink-removal demo, kept here commented out for reference:
# trace = logger.add('my_log.log')
# logger.debug('this is a debug message')
# logger.remove(trace)
# logger.debug('this is another debug message')

# Write INFO-and-above messages to a file, rotating at ~10 kB and keeping
# rotated files for about 1 minute (deliberately small values for demonstration)
logger.add('./test_flask_run.log', rotation='10kb', retention='1 minutes', format="{time} {level} {message}", level="INFO")


app = Flask(__name__)


@app.route('/')
def index():
    now_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    logger.info(f"CALL AT: {now_time}")
    logger.info("*" * 30)   # separator line in the log
    logger.info('\n')       # blank line between requests
    return f'<h1>Manman, good morning, good noon, good evening! | CALL AT: {now_time}</h1>'


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=1234)

Topics: Python Back-end Flask logging