Why does a MySQL query executed by an Event run poorly after performing well?

Problem description

Hello. I have quite a bit of experience as a SQL developer on Microsoft SQL Server, but almost none as a DBA, and I am just starting to learn MySQL. Basically, I have a scheduled stored procedure that runs fine for a few hours and then suddenly stops performing, running almost 30 times slower. (It is not a locking/blocking issue.)

I am generating a large amount of random test data on a new server with almost no other activity, and I set up an EVENT to run every 10 minutes. The event does some very basic logging and executes two stored procedures: one populates a staging table, and the other populates the final tables (this is closer to how data enters the system in production).

The Event

  • Executes the 2 stored procedures every 10 minutes
  • Logs to a table how long each run took
  • Reads the log table and skips execution if the previous run has not finished
delimiter $$

CREATE EVENT Score_Stage_Processing_ANDTEST
ON SCHEDULE EVERY 10 minute
STARTS CURRENT_TIMESTAMP 
ON COMPLETION NOT PRESERVE
ENABLE
DO
BEGIN 

    set @ProcName = 'Score_Stage_Processing_ANDTEST';
    set @EndDate = (
        select EndDate 
        from Event_Log el 
        where Name = @ProcName
        order by StartDate desc     
        limit 1);
    set @StartDate = (
        select StartDate 
        from Event_Log el 
        where Name = @ProcName
        order by StartDate desc
        limit 1);
        
    -- Only execute if last execution was successful.
    IF ((@StartDate is not null and @EndDate is not null) or (@StartDate is null and @EndDate is null))
    THEN    
        INSERT INTO Event_Log(Name, StartDate, EndDate)
        VALUES(@ProcName, now(), null);
       
        Set @ID = Last_Insert_ID();
    
        set bulk_insert_buffer_size = 1024*1024*256; -- default 1024*1024*8
        call test_create_scores(1000);
        call Score_Stage_Processing();

        update Event_Log
        set EndDate = now()
        where ID = @ID;
        
    END IF;
  
end $$
delimiter ; 

Stored procedure 1

  • Generates 70k random records and puts them into the staging table for processing
CREATE DEFINER=`root`@`localhost` PROCEDURE `test_create_scores`(
    IN in_NumInsertS int
)
sp: BEGIN

    DECLARE i INT DEFAULT 1;    

    set @max = in_NumInsertS;
    
    while i <= @max
    DO
    
    Set @STD = 5000;
    Set @Mean = 20000;
    
    -- 20 random levels Unbreaking New
    insert into stg_Score_Pending (LevelID, SteamID, Score, Stress, isUnbreaking)
    select LevelID 
        , FLOOR(RAND() * (1000000000-100000000) + 100000000) as SteamID -- pretty much always new people
        , floor(((RAND() * 2 - 1) + (RAND() * 2 - 1) + (RAND() * 2 - 1)) * @STD + @Mean) as RandScore
        , FLOOR(RAND() * (9900-6000) + 6000) as Stress -- between 60 and 99
        , 1 as isUnbreaking
    from Level
    where LevelType = 'Campaign'
    order by rand()
    limit 40;
    
    -- 15 random levels breaking new players
    insert into stg_Score_Pending (LevelID, SteamID, Score, Stress, isUnbreaking)
    select LevelID 
        , FLOOR(RAND() * (1000000000-100000000) + 100000000) as SteamID -- pretty much always new people
        , floor(((RAND() * 2 - 1) + (RAND() * 2 - 1) + (RAND() * 2 - 1)) * @STD + @Mean) as RandScore
        , 10000 as Stress -- fixed at the maximum (100)
        , 0 as isUnbreaking
    from Level
    where LevelType = 'Campaign'
    order by rand()
    limit 30;
    

    SET i = i + 1;
    end while;

    leave sp;

    
END;

Stored procedure 2

  • Removes duplicate records from staging as needed
  • Inserts or updates records into two different tables (~70k rows into two different tables)
CREATE DEFINER=`root`@`localhost` PROCEDURE `score_stage_processing`()
BEGIN

    set @BatchSize = 10000;
    set @BatchCount = 200;
    
    set @InitialMax = (select max(ID) from `stg_Score_Pending`);
    set @m = 2147483647;

    -- batches and caps number of updates
    set @MinID = (select min(ID) from `stg_Score_Pending`);
    set @MaxID = @minID + @BatchSize;

    while @BatchCount > 0 and @InitialMax > @MaxID - @BatchSize
    do

        -- Identify Pending Minimum Stress and Score
            create temporary table if not exists tmp_ScoreBudgetStress
                (primary key tmp_stress_pkey (LevelID, SteamID))
            select ssp.LevelID 
                , ssp.SteamID 
                , case when min(ssp.Score) < ifnull(min(sb.Score),@m) Then min(ssp.Score) else min(sb.Score) end as MinScore
                , case when min(ssp.Stress) < ifnull(min(ss.Score),@m) then min(ssp.Stress) else min(ss.Score) end as MinStress
            from stg_Score_Pending ssp 
                left join Score_Budget sb on sb.LevelID = ssp.LevelID -- This prevents INCREASING the score  
                    and sb.SteamID = ssp.SteamID 
                    and sb.Score < ssp.Score 
                left join Score_Stress ss on ss.LevelID  = ssp.LevelID -- This prevents INCREASING the score
                    and ss.SteamID  = ssp.SteamID 
                    and ss.Score  < sb.Score 
            where ssp.id <= @MaxID 
            group by ssp.LevelID, ssp.SteamID;
        
        
        -- Identify Pending Minimum Unbreaking
            create temporary table if not exists tmp_ScoreUnbreakingBudget
                (primary key tmp_budget_pkey (LevelID, SteamID))
            select ssp.LevelID 
                , ssp.SteamID 
                , case when min(ssp.Score) < ifnull(min(sb.Score),@m) Then min(ssp.Score) else min(sb.Score) end as MinUnbreakingScore
            from stg_Score_Pending ssp 
                left join Score_Budget sb on sb.LevelID = ssp.LevelID -- This prevents INCREASING the score  
                    and sb.SteamID = ssp.SteamID 
                    and sb.Score < ssp.Score 
            where ssp.id <= @MaxID 
                and ssp.isUnbreaking = 1
            group by ssp.LevelID, SteamID;
        
        -- Updates to SCORE BUDGET
        
            update Score_Budget sb 
                inner join tmp_ScoreBudgetStress s on s.LevelID = sb.LevelID -- inner join serves as existence check (update all scores that already exist in the table)
                    and s.SteamID = sb.SteamID 
                left join tmp_ScoreUnbreakingBudget u on u.LevelID = sb.LevelID  
                    and u.SteamID = sb.SteamID
            set sb.Score = s.MinScore
                , sb.ScoreUnbreaking = u.MinUnbreakingScore
                , sb.hasNoUnbreaking = case when u.MinUnbreakingScore is null then 1 else 0 end;
         
            insert into Score_Budget (LevelID, SteamID, Score, ScoreUnbreaking, hasNoUnbreaking, SampleKey)
            select s.LevelID
                , s.SteamID
                , s.MinScore
                , u.MinUnbreakingScore
                , case when u.MinUnbreakingScore is null then 1 else 0 end
                , case floor(rand() * 10) 
                     when 0 then 1 -- 10%
                     when 1 then 2 -- 30%
                     when 2 then 2
                     when 3 then 2
                     when 4 then 3 -- 60%
                     when 5 then 3
                     when 6 then 3
                     when 7 then 3
                     when 8 then 3
                     when 9 then 3
                     end as SampleKey
            from tmp_ScoreBudgetStress s
                left join tmp_ScoreUnbreakingBudget u on u.LevelID = s.LevelID  
                    and u.SteamID = s.SteamID
            where not exists (
                select 1
                from Score_Budget sb
                where sb.LevelID  = s.LevelID 
                    and sb.SteamID  = s.SteamID
                );
            
        -- Updates to SCORE STRESS
            update Score_Stress ss 
                inner join tmp_ScoreBudgetStress s on s.LevelID = ss.LevelID -- inner join serves as existence check (update all scores that already exist in the table)
                    and s.SteamID = ss.SteamID 
                left join tmp_ScoreUnbreakingBudget u on u.LevelID = ss.LevelID  
                    and u.SteamID = ss.SteamID
            set ss.Score = s.MinStress;
            
            insert into Score_Stress (LevelID, SteamID, Score, SampleKey)
            select s.LevelID
                , s.SteamID
                , s.MinStress
                , case floor(rand() * 10) 
                     when 0 then 1 -- 10%
                     when 1 then 2 -- 30%
                     when 2 then 2
                     when 3 then 2
                     when 4 then 3 -- 60%
                     when 5 then 3
                     when 6 then 3
                     when 7 then 3
                     when 8 then 3
                     when 9 then 3
                     end as SampleKey
            from tmp_ScoreBudgetStress s 
                left join tmp_ScoreUnbreakingBudget u on u.LevelID = s.LevelID  
                    and u.SteamID = s.SteamID
            where not exists (
                select 1
                from Score_Stress ss
                where ss.LevelID  = s.LevelID
                    and ss.SteamID  = s.SteamID
                );
        
        -- Clear Out Staging Table
            
            Delete d From stg_Score_Pending d Where id <= @MaxID;       
            
        -- Drop temporary tables
            drop temporary table if exists tmp_ScoreBudgetStress;
            drop temporary table if exists tmp_ScoreUnbreakingBudget;   
        
        set @MaxID = @MaxID + @BatchSize;
        set @BatchCount = @BatchCount - 1;
    end while;
    
    
END;

The main problem: The log table shows the event starting and finishing quickly, and then suddenly starting to take an enormous amount of time. For example, on my last attempt the event ran successfully in about 30 seconds per execution; then it abruptly started taking 15 minutes per execution. (I have special handling to make sure it does not start while a previous run is still in progress.)
[Screenshot of the custom event log showing fast executions, then slow ones]

After the event starts running slowly, I have to disable it and leave the job stopped for several hours before trying again. Other than waiting and retrying (usually the next day), I do not know what I need to do to fix it immediately.

My guess: I think the server is doing one of two things:

  1. The server is getting a bad execution plan. As more and more rows are added, the table statistics become stale and MySQL can no longer find a good plan. I tried adding analyze table to the event, but that did not seem to reset the problem or prevent it from happening (a sketch of what I mean follows this list).
  2. Some memory buffer is filling up and I have to wait for it to be flushed. I tried increasing the variable bulk_insert_buffer_size from 8MB to 256MB with no effect. I also added the set command to the event to make sure it keeps that value.
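
For reference, this is roughly the kind of statement I mean; the table names come from the DDLs below, and the exact placement inside the event is only illustrative:

    -- Refresh optimizer statistics for the tables that churn the most
    ANALYZE TABLE stg_Score_Pending, Score_Budget, Score_Stress;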

Note: there are no locked tables, this is the only process running on the server, and nobody besides me is connected to it. When I check show full processlist while it is running slowly, no other processes are running.

I suspect I need to change some server configuration, or that some kind of cache needs to be cleared, to prevent the sudden slowdown.

So far I have mostly just tried editing a few different variables. I have also tried restarting the server, flushing the buffers I know about, and analyzing the tables that change a lot.

    set bulk_insert_buffer_size = 1024*1024*256; -- 256mb default 1024*1024*8
    set persist key_buffer_size = 1024*1024*1024; -- 1gb default 1024*1024*16  (recommends 25 to 30 percent of total memory on server)
    set global innodb_buffer_pool_size = 1024*1024*1024*13; -- 13gb default 1024*1024*128

Thanks for your help and time!

Edit: DDLs

CREATE TABLE `stg_Score_Pending` (
  `ID` bigint NOT NULL AUTO_INCREMENT,
  `LevelID` varchar(20) NOT NULL,
  `SteamID` bigint NOT NULL,
  `Score` int NOT NULL,
  `isUnbreaking` bit(1) NOT NULL,
  `Stress` int NOT NULL,
  PRIMARY KEY (`ID`),
  KEY `ix_stg_Score_Pending_LevelID_SteamID` (`LevelID`,`SteamID`)
) ENGINE=InnoDB AUTO_INCREMENT=16948201 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=COMPRESSED;

CREATE TABLE `Score_Budget` (
  `ID` int NOT NULL AUTO_INCREMENT,
  `LevelID` varchar(20) NOT NULL,
  `SteamID` bigint NOT NULL,
  `Score` int NOT NULL,
  `ScoreUnbreaking` int DEFAULT NULL,
  `hasNoUnbreaking` bit(1) NOT NULL,
  `SampleKey` tinyint NOT NULL,
  PRIMARY KEY (`ID`),
  UNIQUE KEY `ux_Score_Budget_LevelID_SteamID` (`LevelID`,`SteamID`),
  KEY `ix_Score_Budget_LevelID_unbreaking` (`LevelID`,`SampleKey`,`hasNoUnbreaking`,`ScoreUnbreaking`),
  KEY `ix_Score_Budget_LevelID_overall` (`LevelID`,`Score`)
) ENGINE=InnoDB AUTO_INCREMENT=14067791 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=COMPRESSED;

CREATE TABLE `Score_Stress` (
  `ID` int NOT NULL AUTO_INCREMENT,
  `LevelID` varchar(20) NOT NULL,
  `SteamID` bigint NOT NULL,
  `Score` int NOT NULL,
  `SampleKey` tinyint NOT NULL,
  PRIMARY KEY (`ID`),
  UNIQUE KEY `ux_Score_Stress_LevelID_SteamID` (`LevelID`,`SteamID`),
  KEY `ix_Score_Stress_LevelID_overall` (`LevelID`,`SampleKey`,`Score`)
) ENGINE=InnoDB AUTO_INCREMENT=14067791 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=COMPRESSED;
1 Answer

I suspect you are using MyISAM, and that is the source of the problem. Change to InnoDB, lower key_buffer_size to 20M, and raise innodb_buffer_pool_size to 70% of available RAM.
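
A minimal sketch of those settings (the 16 GB of total RAM here is only an assumption for the 70% figure; scale it to the actual server):

    SET PERSIST key_buffer_size = 20*1024*1024;               -- 20M; used only for MyISAM indexes
    SET PERSIST innodb_buffer_pool_size = 11*1024*1024*1024;  -- ~70% of an assumed 16 GB of RAM
    -- and for any table still on MyISAM, for example:
    ALTER TABLE stg_Score_Pending ENGINE=InnoDB;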

MyISAM comments and questions

  • MyISAM has trouble with lots of deletes on big tables. The table can become fragmented; even individual rows can become fragmented.

  • How much RAM do you have? Keep in mind that the key_buffer is used only for MyISAM indexes (primary, unique, and ordinary indexes alike). A way to check what is actually in use is sketched just below this list.
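
A quick way to see which storage engines and buffer sizes are actually in play (standard information_schema and SHOW queries, shown here only as a sketch):

    -- Which engine does each table in the current schema use?
    SELECT TABLE_NAME, ENGINE
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE();

    -- Current buffer settings
    SHOW VARIABLES LIKE 'key_buffer_size';
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';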

InnoDB comments and questions

  • What is the value of innodb_buffer_pool_size?

  • What is the value of autocommit? Is there anything transactional around the event (BEGIN ... COMMIT)? Perhaps there should be? (A sketch follows below.)
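
A sketch of the transactional idea, assuming it would go inside the batch loop of score_stage_processing (illustrative only, not the poster's actual code):

    SHOW VARIABLES LIKE 'autocommit';

    -- Inside the batch loop: make each batch atomic instead of committing
    -- every statement separately under autocommit.
    START TRANSACTION;
        -- ... the batch's UPDATE / INSERT statements ...
        DELETE FROM stg_Score_Pending WHERE ID <= @MaxID;
    COMMIT;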

For either engine

  • If any of the tables keeps growing in size, that could eventually cause a sudden slowdown (a size check is sketched below).
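
One way to watch for that, using only information_schema (a sketch; TABLE_ROWS is an estimate for InnoDB):

    SELECT TABLE_NAME,
           TABLE_ROWS,
           ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024) AS approx_size_mb
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE()
    ORDER BY DATA_LENGTH + INDEX_LENGTH DESC;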

  • Several pairs of SET @.. = ( SELECT ... ) can be combined into a single SELECT .. INTO @this, @that ... (sketched below).
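
For the two lookups at the top of the event, that could look roughly like this (a sketch based on the posted Event_Log columns):

    SELECT StartDate, EndDate
      INTO @StartDate, @EndDate
    FROM Event_Log
    WHERE Name = @ProcName
    ORDER BY StartDate DESC
    LIMIT 1;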

  • Could the UPDATE/INSERT pairs be done with a single INSERT ... ON DUPLICATE KEY UPDATE ... (see the sketch below)?
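
For Score_Stress, which has a unique key on (LevelID, SteamID), a sketch of that pattern might look like this (illustrative only; the SampleKey CASE keeps the same 10/30/60 split):

    INSERT INTO Score_Stress (LevelID, SteamID, Score, SampleKey)
    SELECT s.LevelID, s.SteamID, s.MinStress,
           CASE FLOOR(RAND() * 10)
                WHEN 0 THEN 1                               -- 10%
                WHEN 1 THEN 2 WHEN 2 THEN 2 WHEN 3 THEN 2   -- 30%
                ELSE 3                                      -- 60%
           END AS SampleKey
    FROM tmp_ScoreBudgetStress s
    ON DUPLICATE KEY UPDATE Score = VALUES(Score);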

  • If that DELETE really is "clearing out the table", replace it with TRUNCATE.

  • If Score_Budget has a unique key on the LevelID and SteamID columns, you can drop the EXISTS clause and change the INSERT to INSERT IGNORE (sketched below).
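
With the ux_Score_Budget_LevelID_SteamID unique key from the posted DDL, a sketch of that change (illustrative only):

    INSERT IGNORE INTO Score_Budget
        (LevelID, SteamID, Score, ScoreUnbreaking, hasNoUnbreaking, SampleKey)
    SELECT s.LevelID, s.SteamID, s.MinScore, u.MinUnbreakingScore,
           CASE WHEN u.MinUnbreakingScore IS NULL THEN 1 ELSE 0 END,
           CASE FLOOR(RAND() * 10)
                WHEN 0 THEN 1
                WHEN 1 THEN 2 WHEN 2 THEN 2 WHEN 3 THEN 2
                ELSE 3
           END
    FROM tmp_ScoreBudgetStress s
        LEFT JOIN tmp_ScoreUnbreakingBudget u
               ON u.LevelID = s.LevelID AND u.SteamID = s.SteamID;
    -- Rows whose (LevelID, SteamID) already exist are silently skipped,
    -- which is what the NOT EXISTS filter was doing.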

  • Please provide SHOW CREATE TABLE for each table. The indexes may be contributing to the slowdown.
