Multiple worker processes logging to the same rotating file

Question · Votes: 0 · Answers: 3

I am having a problem with my nginx + uwsgi + django site. I know this is nothing specific to django + uwsgi; it appears to be behaviour of the logging module itself.

In my site I use RotatingFileHandler to log special entries, but when uwsgi runs with multiple worker processes, I discovered today that multiple log files change at the same time. For example, here is a snapshot of the files:

[root@speed logs]# ls -lth
total 18M
-rw-rw-rw- 1 root root  2.1M Sep 14 19:44 backend.log.7
-rw-rw-rw- 1 root root  1.3M Sep 14 19:43 backend.log.6
-rw-rw-rw- 1 root root  738K Sep 14 19:43 backend.log.3
-rw-rw-rw- 1 root root  554K Sep 14 19:43 backend.log.1
-rw-rw-rw- 1 root root 1013K Sep 14 19:42 backend.log.4
-rw-rw-rw- 1 root root  837K Sep 14 19:41 backend.log.5
-rw-rw-rw- 1 root root  650K Sep 14 19:40 backend.log.2
-rw-rw-rw- 1 root root  656K Sep 14 19:40 backend.log
-rw-r--r-- 1 root root   10M Sep 13 10:11 backend.log.8
-rw-r--r-- 1 root root     0 Aug 21 15:53 general.log
[root@speed-app logs]#

I actually configured the rotation as 10 MB per file, with at most 10 files.

I googled a lot; many people have run into this before, and it seems the logging module itself does not support this.

I found that some people mentioned ConcurrentLogHandler (https://pypi.python.org/pypi/ConcurrentLogHandler/0.9.1). Has anyone used it before? I see it is based on a file lock, but I am not sure whether its performance is good enough.

Or does anyone have a better idea for logging from multiple uWSGI instances to the same rotating file?

Thanks. Wesley

python django logging uwsgi
3 Answers

6 votes

Just for fun, here is a full example solution that logs to a file with rotation, using a Python StreamHandler, uWSGI "daemonized file logging", and the logrotate daemon.

As you will see, uWSGI logging captures stdout/stderr from your application and redirects it to stdout/stderr (by default) or to any other logger/handler you define.

Set up Django/uWSGI

Your Django settings.py:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'default': {
            'format': '%(asctime)s - %(process)s - %(levelname)s - %(name)s : %(message)s',
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
        },
    },
    'root': {
        'handlers': ['console'],
        'level': 'DEBUG',
    },
}

Somewhere in your code:

log = logging.getLogger(__name__)
log.info("test log!")

Run uWSGI with some logging parameters:

$ uwsgi --http :9090 --chdir=`pwd -P` --wsgi-file=wsgi.py \
    --daemonize=test.log \
    --log-maxsize=10000  \
    --workers=4
# --daemonize: daemonize AND write the log to test.log
# --log-maxsize: rotate at roughly 10 kB
# --workers: start 4 workers

Output

Excerpt from test.log:

*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 79755)
spawned uWSGI worker 1 (pid: 79813, cores: 1)
spawned uWSGI worker 2 (pid: 79814, cores: 1)
spawned uWSGI worker 3 (pid: 79815, cores: 1)
spawned uWSGI worker 4 (pid: 79816, cores: 1)
spawned uWSGI http 1 (pid: 79817)
2015-10-12 07:55:48,458 - 79816 - INFO - testapp.views : test log!
2015-10-12 07:55:51,440 - 79813 - INFO - testapp.views : test log!
2015-10-12 07:55:51,965 - 79814 - INFO - testapp.views : test log!
2015-10-12 07:55:52,810 - 79815 - INFO - testapp.views : test log!

In the same directory, after a while:

-rw-r-----   1 big  staff   1.0K Oct 12 09:56 test.log
-rw-r-----   1 big  staff    11K Oct 12 09:55 test.log.1444636554

Log rotation

Alternatively, to handle rotation yourself, omit the --log-maxsize parameter and use a logrotate config file (/etc/logrotate.d/uwsgi-test-app):

/home/demo/test_django/*log {
    rotate 10
    size 10k
    daily
    compress
    delaycompress
}
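One caveat worth noting (our addition, to be verified against your setup): uWSGI keeps the daemonized log file open, so if logrotate rotates by renaming, the workers keep writing to the rotated copy. logrotate's copytruncate directive instead copies the live file and truncates it in place, at the cost of possibly losing a few lines written between the copy and the truncate:

```
/home/demo/test_django/*log {
    rotate 10
    size 10k
    daily
    copytruncate
    compress
    delaycompress
}
```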

Note that the values above are just for the example; you probably don't want a rotation size of 10k. For more on the logrotate format, see the example blog post.


1 vote

If you use Python's log rotation (with multiple gunicorn processes pointing at the same log file), you should make sure that during rotation the main log file is only edited in place, never renamed or moved. To achieve that, copy the main log file and then clear it!

Snippet of the rollover method (edited from the code of logging.handlers.RotatingFileHandler):

# Note: this method body uses os, shutil and codecs.
def doRollover(self):
    self.stream.close()
    if self.backupCount > 0:
        for i in range(self.backupCount - 1, 0, -1):
            sfn = "%s.%d" % (self.baseFilename, i)
            dfn = "%s.%d" % (self.baseFilename, i + 1)
            if os.path.exists(sfn):
                if os.path.exists(dfn):
                    os.remove(dfn)
                os.rename(sfn, dfn)
        dfn = self.baseFilename + ".1"
        if os.path.exists(dfn):
            os.remove(dfn)
        # os.rename(self.baseFilename, dfn)  # instead of renaming,
        # copy the file and then truncate it in place:
        shutil.copyfile(self.baseFilename, dfn)
        open(self.baseFilename, 'w').close()
    if self.encoding:
        self.stream = codecs.open(self.baseFilename, "w", self.encoding)
    else:
        self.stream = open(self.baseFilename, "w")
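The same idea can also be packaged as a subclass instead of editing the stdlib module in place. A minimal sketch (the class name and usage values below are ours, not from the answer):

```python
import logging
import os
import shutil
import tempfile
from logging.handlers import RotatingFileHandler

class CopyTruncateRotatingFileHandler(RotatingFileHandler):
    """Rotate by copy + truncate, so the live file is never renamed."""

    def doRollover(self):
        if self.stream:
            self.stream.close()
            self.stream = None
        if self.backupCount > 0:
            # Shift app.log.1 -> app.log.2, ... just like the stock handler.
            for i in range(self.backupCount - 1, 0, -1):
                sfn = "%s.%d" % (self.baseFilename, i)
                dfn = "%s.%d" % (self.baseFilename, i + 1)
                if os.path.exists(sfn):
                    if os.path.exists(dfn):
                        os.remove(dfn)
                    os.rename(sfn, dfn)
            dfn = self.baseFilename + ".1"
            if os.path.exists(dfn):
                os.remove(dfn)
            # Copy the live file, then truncate it in place.
            shutil.copyfile(self.baseFilename, dfn)
            open(self.baseFilename, "w").close()
        if not self.delay:
            self.stream = self._open()

# Usage sketch: tiny maxBytes so rotation is easy to observe.
log_dir = tempfile.mkdtemp()
path = os.path.join(log_dir, "app.log")
handler = CopyTruncateRotatingFileHandler(path, maxBytes=200, backupCount=3)
logger = logging.getLogger("copytruncate-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
for i in range(50):
    logger.info("message %d", i)
handler.close()
```

After the loop, app.log.1 (and older backups) exist while app.log itself was only ever truncated, never renamed.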

Then you can create your logger like this:

# Assumes logging and time are imported, RotatingFileHandler is the patched
# class above, and logfile_folder, logfile_name, maxBytes and the format
# string are defined elsewhere.
logger = logging.getLogger(logfile_name)
logfile = '{}/{}.log'.format(logfile_folder, logfile_name)
handler = RotatingFileHandler(
    logfile, maxBytes=maxBytes, backupCount=10
)
formatter = logging.Formatter(format, "%Y-%m-%d_%H:%M:%S")
formatter.converter = time.gmtime
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.isEnabledFor = lambda level: True
logger.propagate = 0

logger.warning("This is a log")

0 votes

I am facing a similar issue as well; could someone help me out?

import os
import fcntl
import logging.handlers
from logging import Handler
from logging.handlers import BaseRotatingHandler

class ConcurrentRotatingFileHandler(BaseRotatingHandler):
    def __init__(self, filename, mode='a', maxBytes=0, backupCount=0,
                 encoding=None):
        BaseRotatingHandler.__init__(self, filename, mode, encoding)
        self.maxBytes = maxBytes
        self.backupCount = backupCount
        head, tail = os.path.split(filename)
        self.stream_lock = open("{}/.{}.lock".format(head, tail), "w")

    def _openFile(self, mode):
        self.stream = open(self.baseFilename, mode)

    def acquire(self):
        Handler.acquire(self)
        fcntl.flock(self.stream_lock, fcntl.LOCK_EX)
        if self.stream.closed:
            self._openFile(self.mode)

    def release(self):
        if not self.stream.closed:
            self.stream.flush()
        if not self.stream_lock.closed:
            fcntl.flock(self.stream_lock, fcntl.LOCK_UN)
        Handler.release(self)

    def close(self):
        if not self.stream.closed:
            self.stream.flush()
            self.stream.close()
        if not self.stream_lock.closed:
            self.stream_lock.close()
        Handler.close(self)

    def flush(self):
        pass

    def doRollover(self):
        self.stream.close()
        if self.backupCount <= 0:
            self._openFile(self.mode)
            return
        try:
            tmpname = "{}.rot.{}".format(self.baseFilename, os.getpid())
            os.rename(self.baseFilename, tmpname)
            for i in range(self.backupCount - 1, 0, -1):
                sfn = "%s.%d" % (self.baseFilename, i)
                dfn = "%s.%d" % (self.baseFilename, i + 1)
                if os.path.exists(sfn):
                    if os.path.exists(dfn):
                        os.remove(dfn)
                    os.rename(sfn, dfn)
            dfn = self.baseFilename + ".1"
            if os.path.exists(dfn):
                os.remove(dfn)
            os.rename(tmpname, dfn)
        finally:
            self._openFile(self.mode)

    def shouldRollover(self, record):
        def _shouldRollover():
            if self.maxBytes > 0:
                if self.stream.tell() >= self.maxBytes:
                    return True
            return False

        if _shouldRollover():
            # Another process may already have rotated the file:
            # reopen it and check the size again before rolling over.
            self.stream.close()
            self._openFile(self.mode)
            return _shouldRollover()
        return False

# Publish this class to the "logging.handlers" module so that it can be used
# from a logging config file via logging.config.fileConfig().

logging.handlers.ConcurrentRotatingFileHandler = ConcurrentRotatingFileHandler

[root@overcloudtrain4-controller-0 concurrent_log_handler]# du -sh /var/log/mylog.*
0   /var/log/mylog.log
28K /var/log/mylog.log.1
1.2M    /var/log/mylog.log.2
8.0K    /var/log/mylog.log.2.gz
12K /var/log/mylog.log.3.gz
20K /var/log/mylog.log.4.gz
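As a quick, self-contained sanity check of the fcntl.flock behaviour the handler above relies on (the names here are ours): an exclusive lock taken through one file description blocks a second attempt until it is released, which is what serializes writers across worker processes:

```python
import fcntl
import os
import tempfile

lock_path = os.path.join(tempfile.mkdtemp(), ".app.log.lock")

holder = open(lock_path, "w")
fcntl.flock(holder, fcntl.LOCK_EX)            # first writer takes the lock

# A second open() gives an independent file description, so a non-blocking
# attempt to take the same exclusive lock fails while the first holds it.
contender = open(lock_path, "w")
try:
    fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
    blocked = False
except BlockingIOError:
    blocked = True

fcntl.flock(holder, fcntl.LOCK_UN)            # release ...
fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)  # ... now it succeeds
fcntl.flock(contender, fcntl.LOCK_UN)
holder.close()
contender.close()
```

Note that flock locks are advisory and Unix-only: every process writing the log must cooperate by taking the lock.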