logging.handlers: how to roll over after a time interval or after maxBytes?

Question · Votes: 0 · Answers: 5

I'm having some trouble with logging. I want the log to roll over both after a certain time and after it reaches a certain size.

Rollover after a time interval is provided by TimedRotatingFileHandler, and rollover after reaching a certain log size is provided by RotatingFileHandler.

But TimedRotatingFileHandler has no maxBytes attribute, and RotatingFileHandler cannot rotate after a time interval. I also tried adding both handlers to the logger, but then every record was logged twice, roughly with the setup sketched below.
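A minimal sketch of that two-handler attempt (the file name and limits here are just illustrative):

import logging
import logging.handlers

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# Every record passed to the logger is emitted by both handlers,
# so the output is effectively duplicated.
timed = logging.handlers.TimedRotatingFileHandler("app.log", when="h", interval=1, backupCount=5)
sized = logging.handlers.RotatingFileHandler("app.log", maxBytes=1048576, backupCount=5)
logger.addHandler(timed)
logger.addHandler(sized)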

Am I missing something?

I also looked at the source code of logging.handlers. I tried subclassing TimedRotatingFileHandler and overriding the shouldRollover() method to create a class with the functionality of both:

import logging.handlers
import time

class EnhancedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler):
    def __init__(self, filename, when='h', interval=1, backupCount=0, encoding=None, delay=0, utc=0, maxBytes=0):
        """ This is just a combination of TimedRotatingFileHandler and RotatingFileHandler (adds maxBytes to TimedRotatingFileHandler)  """
        # super(self). #It's old style class, so super doesn't work.
        # Forward the constructor arguments instead of hard-coded defaults.
        logging.handlers.TimedRotatingFileHandler.__init__(self, filename, when, interval, backupCount, encoding, delay, utc)
        self.maxBytes = maxBytes

    def shouldRollover(self, record):
        """
        Determine if rollover should occur.

        Basically, see if the supplied record would cause the file to exceed
        the size limit we have.

        we are also comparing times        
        """
        if self.stream is None:                 # delay was set...
            self.stream = self._open()
        if self.maxBytes > 0:                   # are we rolling over?
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)  #due to non-posix-compliant Windows feature
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1
        t = int(time.time())
        if t >= self.rolloverAt:
            return 1
        #print "No need to rollover: %d, %d" % (t, self.rolloverAt)
        return 0         

But like this, the log creates one backup and then gets overwritten. It seems I also have to override the doRollover() method, which isn't that easy.

Does anyone have another idea how to create a handler that rolls the file over both after a certain time and after reaching a certain size?

python logging handlers
5 Answers

16 votes

So I made a small modification to TimedRotatingFileHandler so that it can roll over both after a time and after a size. I had to modify __init__, shouldRollover, doRollover and getFilesToDelete (see below). This is the result when I set when='M', interval=2, backupCount=20, maxBytes=1048576:

-rw-r--r-- 1 user group  185164 Jun 10 00:54 sumid.log
-rw-r--r-- 1 user group 1048462 Jun 10 00:48 sumid.log.2011-06-10_00-48.001    
-rw-r--r-- 1 user group 1048464 Jun 10 00:48 sumid.log.2011-06-10_00-48.002    
-rw-r--r-- 1 user group 1048533 Jun 10 00:49 sumid.log.2011-06-10_00-48.003    
-rw-r--r-- 1 user group 1048544 Jun 10 00:50 sumid.log.2011-06-10_00-49.001    
-rw-r--r-- 1 user group  574362 Jun 10 00:52 sumid.log.2011-06-10_00-50.001

You can see that the first four logs rolled over after reaching the 1 MB size limit, while the last rollover happened after two minutes. So far I haven't tested deleting old log files, so it may not work. The code certainly won't work for backupCount >= 1000, since I append only three digits to the end of the file names. (A usage sketch follows the code below.)

Here is the modified code:

import logging.handlers
import os
import time

class EnhancedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler):
    def __init__(self, filename, when='h', interval=1, backupCount=0, encoding=None, delay=0, utc=0, maxBytes=0):
        """ This is just a combination of TimedRotatingFileHandler and RotatingFileHandler (adds maxBytes to TimedRotatingFileHandler)  """
        logging.handlers.TimedRotatingFileHandler.__init__(self, filename, when, interval, backupCount, encoding, delay, utc)
        self.maxBytes = maxBytes

    def shouldRollover(self, record):
        """
        Determine if rollover should occur.

        Basically, see if the supplied record would cause the file to exceed
        the size limit we have.

        we are also comparing times        
        """
        if self.stream is None:                 # delay was set...
            self.stream = self._open()
        if self.maxBytes > 0:                   # are we rolling over?
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)  #due to non-posix-compliant Windows feature
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1
        t = int(time.time())
        if t >= self.rolloverAt:
            return 1
        #print "No need to rollover: %d, %d" % (t, self.rolloverAt)
        return 0         

    def doRollover(self):
        """
        do a rollover; in this case, a date/time stamp is appended to the filename
        when the rollover happens.  However, you want the file to be named for the
        start of the interval, not the current time.  If there is a backup count,
        then we have to get a list of matching filenames, sort them and remove
        the one with the oldest suffix.
        """
        if self.stream:
            self.stream.close()
        # get the time that this sequence started at and make it a TimeTuple
        currentTime = int(time.time())
        dstNow = time.localtime(currentTime)[-1]
        t = self.rolloverAt - self.interval
        if self.utc:
            timeTuple = time.gmtime(t)
        else:
            timeTuple = time.localtime(t)
            dstThen = timeTuple[-1]
            if dstNow != dstThen:
                if dstNow:
                    addend = 3600
                else:
                    addend = -3600
                timeTuple = time.localtime(t + addend)
        dfn = self.baseFilename + "." + time.strftime(self.suffix, timeTuple)
        if self.backupCount > 0:
            cnt = 1
            dfn2 = "%s.%03d" % (dfn, cnt)
            # find the first unused progressive suffix for this timestamp
            while os.path.exists(dfn2):
                cnt += 1
                dfn2 = "%s.%03d" % (dfn, cnt)
            os.rename(self.baseFilename, dfn2)
            for s in self.getFilesToDelete():
                os.remove(s)
        else:
            if os.path.exists(dfn):
                os.remove(dfn)
            os.rename(self.baseFilename, dfn)
        #print "%s -> %s" % (self.baseFilename, dfn)
        self.mode = 'w'
        self.stream = self._open()
        newRolloverAt = self.computeRollover(currentTime)
        while newRolloverAt <= currentTime:
            newRolloverAt = newRolloverAt + self.interval
        #If DST changes and midnight or weekly rollover, adjust for this.
        if (self.when == 'MIDNIGHT' or self.when.startswith('W')) and not self.utc:
            dstAtRollover = time.localtime(newRolloverAt)[-1]
            if dstNow != dstAtRollover:
                if not dstNow:  # DST kicks in before next rollover, so we need to deduct an hour
                    addend = -3600
                else:           # DST bows out before next rollover, so we need to add an hour
                    addend = 3600
                newRolloverAt += addend
        self.rolloverAt = newRolloverAt

    def getFilesToDelete(self):
        """
        Determine the files to delete when rolling over.

        More specific than the earlier method, which just used glob.glob().
        """
        dirName, baseName = os.path.split(self.baseFilename)
        fileNames = os.listdir(dirName)
        result = []
        prefix = baseName + "."
        plen = len(prefix)
        for fileName in fileNames:
            if fileName[:plen] == prefix:
                suffix = fileName[plen:-4]
                if self.extMatch.match(suffix):
                    result.append(os.path.join(dirName, fileName))
        result.sort()
        if len(result) < self.backupCount:
            result = []
        else:
            result = result[:len(result) - self.backupCount]
        return result            
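For reference, a usage sketch with the parameters from the listing above (the logger name is just illustrative):

import logging

logger = logging.getLogger("sumid")
logger.setLevel(logging.DEBUG)
handler = EnhancedRotatingFileHandler("sumid.log", when='M', interval=2,
                                      backupCount=20, maxBytes=1048576)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)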

7 votes

If you really need this, write your own handler based on TimedRotatingFileHandler, using time as the primary rollover trigger, and fold the size-based rollover into the existing logic. You have already tried this, but you need to override (at least) the shouldRollover() and doRollover() methods. The first decides when to roll over; the second closes the current log file, renames existing files and deletes obsolete ones, and then opens a new file.

The doRollover() logic can be a bit tricky, but it is certainly doable.
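As a rough illustration of that advice (not the answerer's actual code), such a subclass could be shaped like this, with doRollover() left as a stub for the renaming and pruning logic described above:

import logging.handlers
import time

class SizeAndTimeRotatingFileHandler(logging.handlers.TimedRotatingFileHandler):
    """Hypothetical skeleton: time-based rollover plus a maxBytes limit."""

    def __init__(self, filename, when='h', interval=1, backupCount=0,
                 encoding=None, delay=False, utc=False, maxBytes=0):
        super().__init__(filename, when, interval, backupCount, encoding, delay, utc)
        self.maxBytes = maxBytes

    def shouldRollover(self, record):
        # Roll over if either the size limit or the scheduled time is reached.
        if self.stream is None:
            self.stream = self._open()
        if self.maxBytes > 0:
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1
        return 1 if int(time.time()) >= self.rolloverAt else 0

    def doRollover(self):
        # Close self.stream, rename the current file, prune old backups,
        # reopen a fresh file and compute the next self.rolloverAt.
        raise NotImplementedError("combine the timed and size-based rollover here")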


3 votes

Here is what I use:

import logging
import logging.handlers

class EnhancedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler, logging.handlers.RotatingFileHandler):
    '''
        cf http://stackoverflow.com/questions/29602352/how-to-mix-logging-handlers-file-timed-and-compress-log-in-the-same-config-f

        Spec:
        Log files limited in size & date. I.e. when the size or date is exceeded, there is a file rollover
    '''

    ########################################

    def __init__(self, filename, mode='a', maxBytes=0, backupCount=0, encoding=None,
                 delay=0, when='h', interval=1, utc=False):

        logging.handlers.TimedRotatingFileHandler.__init__(
            self, filename, when, interval, backupCount, encoding, delay, utc)

        logging.handlers.RotatingFileHandler.__init__(
            self, filename, mode, maxBytes, backupCount, encoding, delay)

    ########################################

    def computeRollover(self, currentTime):
        return logging.handlers.TimedRotatingFileHandler.computeRollover(self, currentTime)

    ########################################

    def getFilesToDelete(self):
        return logging.handlers.TimedRotatingFileHandler.getFilesToDelete(self)

    ########################################

    def doRollover(self):
        return logging.handlers.TimedRotatingFileHandler.doRollover(self)

    ########################################

    def shouldRollover(self, record):
        """ Determine if rollover should occur. """
        return (logging.handlers.TimedRotatingFileHandler.shouldRollover(self, record)
                or logging.handlers.RotatingFileHandler.shouldRollover(self, record))

1 vote

I adapted Julien's code for my own use. Now it rolls over either after reaching a certain log size or after a period of time.

import logging.handlers
import time

class EnhancedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler, logging.handlers.RotatingFileHandler):

    def __init__(self, filename, mode='a', maxBytes=0, backupCount=0, encoding=None,
                 delay=0, when='h', interval=1, utc=False):
        logging.handlers.TimedRotatingFileHandler.__init__(
            self, filename=filename, when=when, interval=interval,
            backupCount=backupCount, encoding=encoding, delay=delay, utc=utc)

        logging.handlers.RotatingFileHandler.__init__(self, filename=filename, mode=mode, maxBytes=maxBytes,
                                                      backupCount=backupCount, encoding=encoding, delay=delay)

    def computeRollover(self, current_time):
        return logging.handlers.TimedRotatingFileHandler.computeRollover(self, current_time)

    def doRollover(self):
        # taken from logging.handlers.TimedRotatingFileHandler.doRollover():
        # recompute the next time-based rollover point first ...
        current_time = int(time.time())
        dst_now = time.localtime(current_time)[-1]
        new_rollover_at = self.computeRollover(current_time)

        while new_rollover_at <= current_time:
            new_rollover_at = new_rollover_at + self.interval

        # If DST changes and midnight or weekly rollover, adjust for this.
        if (self.when == 'MIDNIGHT' or self.when.startswith('W')) and not self.utc:
            dst_at_rollover = time.localtime(new_rollover_at)[-1]
            if dst_now != dst_at_rollover:
                if not dst_now:  # DST kicks in before next rollover, so we need to deduct an hour
                    addend = -3600
                else:  # DST bows out before next rollover, so we need to add an hour
                    addend = 3600
                new_rollover_at += addend
        self.rolloverAt = new_rollover_at

        # ... then let RotatingFileHandler rename the files (base.1, base.2, ...)
        return logging.handlers.RotatingFileHandler.doRollover(self)

    def shouldRollover(self, record):
        return (logging.handlers.TimedRotatingFileHandler.shouldRollover(self, record)
                or logging.handlers.RotatingFileHandler.shouldRollover(self, record))

0 votes

Starting from sumid's answer, I made some changes and fixes and merged it with the latest (2024) versions of TimedRotatingFileHandler and RotatingFileHandler.

Now:

  • If the log file reaches max_bytes during the day, it is rotated by appending the current date and a progressive suffix. The progressive mask runs from 000 to 999; after that the oldest files are progressively overwritten. If you need more logs, raise the maxBytes parameter or change the occurrences of .%03d in the code.
  • Rotated files within a day are kept in reverse order (logrotate style): a larger progressive suffix means an older file of that day.
  • If when='MIDNIGHT', the log file is rotated at the day change, and a suffix containing the previous date and a progressive identifier is appended.
  • The backupCount=numberOfDays parameter sets how many days each log file is kept in the folder, based on the file's last modification time (i.e. the moment it was rotated). So backupCount=30 deletes all files older than 30 days.

Each rotated log file is named:

<base_name>.<current_date>.<progressive_id> (e.g. my_app_log_file.log.2024-04-03.016)

import logging.handlers
import os
import re
import time

class EnhancedTimedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler):
    def __init__(self, filename, when='MIDNIGHT', interval=1, backup_count=0, encoding=None, delay=0, utc=0,
                 max_bytes=0):
        """
        This is just a combination of TimedRotatingFileHandler and RotatingFileHandler (adds maxBytes to TimedRotatingFileHandler)
        If when='MIDNIGHT' the logger will be rotated at the day change and added a suffix containing day and identifier
        If during the day the log file reach 'max_bytes', it will rotated by adding current date and progressive suffix.
        Rotated files in the day are in increasing order, greater progressive suffix is intended as oldest file in the day

        Each rotated file is named as:
        <base_name>.<current_date>.<progressive_id> (ex. my_app_log_file.log.2024-04-03.016)
        """
        super().__init__(filename, when, interval, backup_count, encoding, delay, utc)
        self.maxBytes = max_bytes

        # Update matching file extension suffix mask to "%Y-%m-%d.123"
        extMatch = r"(?<!\d)\d{4}-\d{2}-\d{2}.\d{3}(?!\d)"
        self.extMatch = re.compile(extMatch, re.ASCII)

    def shouldRollover(self, record):
        """
        Determine if rollover should occur.

        Basically, see if the supplied record would cause the file to exceed
        the size limit we have.

        we are also comparing times
        """
        if os.path.exists(self.baseFilename) and not os.path.isfile(self.baseFilename):
            return False
        if self.stream is None:  # delay was set...
            self.stream = self._open()
        if self.maxBytes > 0:  # are we rolling over?
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)  # due to non-posix-compliant Windows feature
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1

        t = int(time.time())
        if t >= self.rolloverAt:
            if os.path.exists(self.baseFilename) and not os.path.isfile(self.baseFilename):
                self.rolloverAt = self.computeRollover(t)
                return 0
            return 1
        # print "No need to rollover: %d, %d" % (t, self.rolloverAt)
        return 0

    def doRollover(self):
        """
        do a rollover; in this case, a date/time stamp is appended to the filename
        when the rollover happens.  However, you want the file to be named for the
        start of the interval, not the current time.  If there is a backup count,
        then we have to get a list of matching filenames, sort them and remove
        the one with the oldest suffix.
        """
        if self.stream:
            self.stream.close()
            self.stream = None
        # get the time that this sequence started at and make it a TimeTuple
        currentTime = int(time.time())
        dstNow = time.localtime(currentTime)[-1]
        t = self.rolloverAt - self.interval
        if self.utc:
            timeTuple = time.gmtime(t)
        else:
            timeTuple = time.localtime(t)
            dstThen = timeTuple[-1]
            if dstNow != dstThen:
                if dstNow:
                    addend = 3600
                else:
                    addend = -3600
                timeTuple = time.localtime(t + addend)
        dfn_base = self.baseFilename + "." + time.strftime(self.suffix, timeTuple)

        if self.backupCount > 0:
            cnt = 1
            dfn2 = "%s.%03d" % (dfn_base, cnt)
            # search for the oldest (highest id of the day) existing rotated log
            while os.path.exists(dfn2):
                cnt += 1
                dfn2 = "%s.%03d" % (dfn_base, cnt)
            # rename all logs beginning from the oldest to newer by adding 1 to all
            for i in range(cnt, 1, -1):
                sfn = "%s.%03d" % (dfn_base, i - 1)
                dfn = "%s.%03d" % (dfn_base, i)

                if os.path.exists(sfn) is True and os.path.exists(dfn) is False:
                    os.rename(sfn, dfn)

            # rotate the current log file to the <base_name>.<current_date>.001
            dfn2 = "%s.%03d" % (dfn_base, 1)
            os.rename(self.baseFilename, dfn2)

            # delete all existing file names older than backupCount
            # backupCount=30 is considered to delete all files older than 30 days
            for s in self.getFilesToDelete():
                os.remove(s)
        else:
            if os.path.exists(dfn_base):
                os.remove(dfn_base)

            os.rename(self.baseFilename, dfn_base)

        self.mode = 'w'
        self.stream = self._open()

        # compute if daily rollover is needed
        newRolloverAt = self.computeRollover(currentTime)
        while newRolloverAt <= currentTime:
            newRolloverAt = newRolloverAt + self.interval
        # If DST changes and midnight or weekly rollover, adjust for this.
        if (self.when == 'MIDNIGHT' or self.when.startswith('W')) and not self.utc:
            dstAtRollover = time.localtime(newRolloverAt)[-1]
            if dstNow != dstAtRollover:
                if not dstNow:  # DST kicks in before next rollover, so we need to deduct an hour
                    addend = -3600
                else:  # DST bows out before next rollover, so we need to add an hour
                    addend = 3600
                newRolloverAt += addend
        self.rolloverAt = newRolloverAt

    def getFilesToDelete(self):
        """
        Determine the files to delete when rolling over.

        More specific than the earlier method, which just used glob.glob().
        """
        dirName, baseName = os.path.split(self.baseFilename)
        fileNames = os.listdir(dirName)
        result = {}
        if self.namer is None:
            prefix = baseName + '.'
            plen = len(prefix)
            for fileName in fileNames:
                if fileName[:plen] == prefix:
                    suffix = fileName[plen:]
                    if self.extMatch.fullmatch(suffix):
                        full_file_name = os.path.join(dirName, fileName)
                        result[full_file_name] = full_file_name
        else:
            # searching in the base folder all the rotated log files
            for fileName in fileNames:
                # Our files could be just about anything after custom naming,
                # but they should contain the datetime suffix.
                # Try to find the datetime suffix in the file name and verify
                # that the file name can be generated by this handler.
                m = self.extMatch.search(fileName)
                while m:
                    dfn = self.namer(self.baseFilename + "." + m[0])
                    if os.path.basename(dfn) == fileName:
                        full_file_name = os.path.join(dirName, fileName)
                        result[full_file_name] = full_file_name
                        break
                    m = self.extMatch.search(fileName, m.start() + 1)

        # check if one or more file name have been modified before than backupCount
        # backupCount=30 is considered to delete all files older than 30 days
        # in one day could be more than one log file
        for file in list(result.keys()):
            last_change = os.path.getmtime(result[file])
            if time.time() - last_change < self.backupCount * 24 * 3600:
                # files younger than backupCount days are dropped from the deletion list (kept on disk)
                del result[file]

        return result.keys()
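A minimal usage sketch (the logger name and the 10 MB limit here are just illustrative values):

import logging

logger = logging.getLogger("my_app")
logger.setLevel(logging.INFO)
handler = EnhancedTimedRotatingFileHandler("my_app_log_file.log", when='MIDNIGHT',
                                           backup_count=30, max_bytes=10 * 1024 * 1024)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)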

I know it can be improved and certainly still contains bugs. Any suggestions are much appreciated.
