Raspberry Pi threads cause increasing latency over time


Introduction

I am developing a multithreaded application in Python 3.9 on a Raspberry Pi that uses ThreadPoolExecutor to manage multiple threads.

Over time, several functions start executing with increasing delay. Initially, a function runs within a second of a button press, but after 2 days the same function takes close to 10 seconds to complete.

I have also noticed that memory usage gradually increases over time even though the task queue length stays constant. I suspect this is related to how the threads are managed, or to how memory is handled inside them.

How the problem arose:

This only started happening after I added the following functions:

def trigger_relay_one(thirdPartyOption=None):
    outputPin = Relay_1

    if thirdPartyOption == "GEN_OUT_1":
        outputPin = GEN_OUT_1

    if thirdPartyOption == "GEN_OUT_2":
        outputPin = GEN_OUT_2

    if thirdPartyOption == "GEN_OUT_3":
        outputPin = GEN_OUT_3

    try:
        setGpioMode()
        setupRelayPin(outputPin)
        logger.info("Before toggleRelay1 thread")
        thread_pool_executor.submit(toggleRelay1, outputPin, 'High', 5000, 1000, 1)
        logger.info("After thread is submitted")
    except RuntimeError:
        return

def update_logs_and_server(dictionary):
    def thread_task():
        update(path + "/json/archivedLogs.json", archived_logs_lock, dictionary)
        update(path + "/json/pendingLogs.json", pending_logs_lock, dictionary)

        update_server_events()

    thread_pool_executor.submit(thread_task)

The functions above run every time the Raspberry Pi receives an input, such as a button press. They were implemented this way to keep log updates from blocking, so the Raspberry Pi can keep listening for input.
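The non-blocking pattern described above can be sketched as follows. This is a minimal, hypothetical reconstruction: `handle_button_press` and `slow_log_update` are stand-ins for the question's real handlers, and `max_workers=4` is an assumption.

```python
from concurrent.futures import ThreadPoolExecutor

# A single shared executor with a fixed worker count; submit() returns
# immediately, so the main loop can keep listening for GPIO input.
thread_pool_executor = ThreadPoolExecutor(max_workers=4)

def slow_log_update(pin):
    # Placeholder for the real update(...) / update_server_events() work.
    return pin

def handle_button_press(pin):
    # Hypothetical handler: offload the slow work to the pool and return.
    future = thread_pool_executor.submit(slow_log_update, pin)
    # Exceptions raised inside a worker are silently stored on the Future
    # unless something inspects them; a done-callback surfaces them.
    future.add_done_callback(lambda f: f.exception())
    return future
```

Note that `submit` itself never blocks; if tasks arrive faster than the fixed pool can drain them, they accumulate in the executor's internal work queue rather than spawning more threads.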

Tracemalloc and htop logs

Here are the logs over time:

Tracemalloc

// The following are three snapshots taken 24 hours apart
Top 10 lines
#1: /usr/lib/python3.9/threading.py:817: 2702.5 KiB
    self._initialized = True
#2: /usr/lib/python3.9/threading.py:381: 2315.4 KiB
    self.notify(len(self._waiters))
#3: /usr/lib/python3.9/threading.py:803: 1136.6 KiB
    self._target = target
#4: /usr/lib/python3.9/threading.py:522: 1066.9 KiB
    self._cond = Condition(Lock())
#5: /usr/lib/python3.9/threading.py:1205: 960.1 KiB
    def invoke_excepthook(thread):
#6: /usr/lib/python3.9/threading.py:820: 889.0 KiB
    self._invoke_excepthook = _make_invoke_excepthook()
#7: /usr/lib/python3.9/_weakrefset.py:85: 576.1 KiB
    self.data.add(ref(item, self._remove))
#8: /usr/lib/python3.9/threading.py:250: 441.2 KiB
    self._waiters = _deque()
#9: /usr/lib/python3.9/threading.py:231: 355.6 KiB
    self._lock = lock
#10: /usr/lib/python3.9/threading.py:930: 320.2 KiB
    self._tstate_lock = _set_sentinel()
813 other: 2401.7 KiB
Total allocated size: 13165.1 KiB

Top 10 lines
#1: /usr/lib/python3.9/threading.py:817: 12771.6 KiB
    self._initialized = True
#2: /usr/lib/python3.9/threading.py:381: 11052.4 KiB
    self.notify(len(self._waiters))
#3: /usr/lib/python3.9/threading.py:803: 5374.9 KiB
    self._target = target
#4: /usr/lib/python3.9/threading.py:522: 5041.5 KiB
    self._cond = Condition(Lock())
#5: /usr/lib/python3.9/threading.py:1205: 4537.3 KiB
    def invoke_excepthook(thread):
#6: /usr/lib/python3.9/threading.py:820: 4201.2 KiB
    self._invoke_excepthook = _make_invoke_excepthook()
#7: /usr/lib/python3.9/_weakrefset.py:85: 2536.5 KiB
    self.data.add(ref(item, self._remove))
#8: /usr/lib/python3.9/threading.py:250: 2031.8 KiB
    self._waiters = _deque()
#9: /usr/lib/python3.9/threading.py:231: 1680.5 KiB
    self._lock = lock
#10: /usr/lib/python3.9/threading.py:751: 1543.5 KiB
    return template % _counter()
793 other: 9620.2 KiB
Total allocated size: 60391.3 KiB

Top 10 lines
#1: /usr/lib/python3.9/threading.py:817: 32927.9 KiB
    self._initialized = True
#2: /usr/lib/python3.9/threading.py:381: 28504.0 KiB
    self.notify(len(self._waiters))
#3: /usr/lib/python3.9/threading.py:803: 13854.6 KiB
    self._target = target
#4: /usr/lib/python3.9/threading.py:522: 12998.0 KiB
    self._cond = Condition(Lock())
#5: /usr/lib/python3.9/threading.py:1205: 11698.1 KiB
    def invoke_excepthook(thread):
#6: /usr/lib/python3.9/threading.py:820: 10831.5 KiB
    self._invoke_excepthook = _make_invoke_excepthook()
#7: /usr/lib/python3.9/_weakrefset.py:85: 5947.4 KiB
    self.data.add(ref(item, self._remove))
#8: /usr/lib/python3.9/threading.py:250: 5221.6 KiB
    self._waiters = _deque()
#9: /usr/lib/python3.9/threading.py:231: 4332.7 KiB
    self._lock = lock
#10: /usr/lib/python3.9/threading.py:751: 4007.5 KiB
    return template % _counter()
868 other: 23689.3 KiB
Total allocated size: 154012.5 KiB
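For reference, snapshots like the ones above can be captured with the standard tracemalloc module. A minimal sketch (the snapshot schedule and output destination in the question are unknown; `log_top_allocations` is a hypothetical helper):

```python
import tracemalloc

# Must be called before the allocations you want to trace.
tracemalloc.start()

def log_top_allocations(limit=10):
    # Group current allocations by source line, mirroring the
    # "Top 10 lines" output shown above.
    snapshot = tracemalloc.take_snapshot()
    stats = snapshot.statistics('lineno')
    for index, stat in enumerate(stats[:limit], 1):
        frame = stat.traceback[0]
        print(f"#{index}: {frame.filename}:{frame.lineno}: "
              f"{stat.size / 1024:.1f} KiB")
    total = sum(stat.size for stat in stats)
    print(f"Total allocated size: {total / 1024:.1f} KiB")
    return total
```

In the snapshots above, the top entries are all inside `threading.py` (`Thread.__init__`, condition variables, excepthook closures), which suggests Thread objects themselves, or things referencing them, are accumulating rather than being freed.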

htop

// The following were snapshots taken 24 hours apart
2024-04-17 14:59:28,126 - piProperty - INFO - 2024-04-17 14:59:28 - CPU Temp: 60.3°C, RAM Usage: 22.34%, CPU Usage: 28.80%
2024-04-18 14:56:12,005 - piProperty - INFO - 2024-04-18 14:56:11 - CPU Temp: 62.3°C, RAM Usage: 33.13%, CPU Usage: 27.40%
2024-04-19 14:55:45,997 - piProperty - INFO - 2024-04-19 14:55:45 - CPU Temp: 59.9°C, RAM Usage: 37.70%, CPU Usage: 25.70%
2024-04-20 12:17:52,795 - piProperty - INFO - 2024-04-20 12:17:52 - CPU Temp: 61.3°C, RAM Usage: 41.46%, CPU Usage: 22.50%

Notes

I am fairly certain the increasing delay is related to how the threading is implemented, because replacing thread_pool_executor.submit(toggleRelay1, outputPin, 'High', 5000, 1000, 1) with a blocking call to toggleRelay1(outputPin, 'High', 5000, 1000, 1), and doing the same for thread_task, no longer causes any delay over time.
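One quick check for this suspicion is to log the live thread count over time. A fixed-size ThreadPoolExecutor caps its own workers, so if the count climbs steadily, threads (or `threading.Timer` objects) are being created somewhere else, such as inside `toggleRelay1`, and never exiting. A minimal diagnostic sketch:

```python
import threading

def log_thread_stats(logger=print):
    # Counts every live thread in the process, including executor
    # workers. Steady growth here points at threads created outside
    # the pool that never terminate.
    count = threading.active_count()
    names = [t.name for t in threading.enumerate()]
    logger(f"live threads: {count} -> {names}")
    return count
```

Logging this alongside the existing piProperty lines would show whether the RAM growth tracks the thread count.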

Questions

  1. Could the delay be caused by how the threads are implemented?
  2. Can Python's threading model leak memory even with apparently correct management and regular garbage collection?
  3. Are there better practices for managing memory and threads when using ThreadPoolExecutor in a long-running application?
  4. If I want log updates to be non-blocking, is there a better implementation than using ThreadPoolExecutor?

If more information would help, I would appreciate it if you could leave me a comment. Thanks very much!

python multithreading raspberry-pi gpio
1 Answer
  1. I don't think you should call setGpioMode every time the function runs. It should probably run once at startup, unless the pin mode actually changes.
  2. The update_logs_and_server function looks like it updates JSON files. Depending on how often you update and how large the files are, rewriting a JSON file consumes more memory over time: unlike a plain-text log file that only appends to the end, it has to open the file, load and modify the contents, and save it all back. You could even consider using a database for the logs, such as PostgreSQL, which has JSON support.
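For question 4, one common alternative to submitting a closure per button press is a single long-lived worker thread draining a queue: each event only enqueues a small item, all file writes are serialized in one place, and the backlog is visible via `qsize()`. This is only a sketch under assumptions; `apply_update` stands in for the question's real update(...) calls.

```python
import queue
import threading

log_queue = queue.Queue()
processed = []  # stand-in for the JSON files actually written

def apply_update(entry):
    # Placeholder for writing archivedLogs.json / pendingLogs.json
    # and calling update_server_events().
    processed.append(entry)

def log_worker():
    # Single consumer: serializes all log updates, so no per-file
    # locks are needed and work cannot pile up unseen in a pool.
    while True:
        entry = log_queue.get()
        if entry is None:           # sentinel to shut the worker down
            log_queue.task_done()
            break
        try:
            apply_update(entry)
        finally:
            log_queue.task_done()

worker = threading.Thread(target=log_worker, daemon=True)
worker.start()
```

The producer side (the button handler) then just calls `log_queue.put({...})` and returns immediately, so input listening stays non-blocking without creating any per-event thread state.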
© www.soinside.com 2019 - 2024. All rights reserved.