Aggregate on multiple fields and map to one result


For data streamed from a ticketing system, we are trying to achieve the following:

Get the number of open tickets, grouped by status and customer. The simplified schema looks like this:


 Field               | Type                      
-------------------------------------------------
 ROWTIME             | BIGINT           (system) 
 ROWKEY              | VARCHAR(STRING)  (system) 
 ID                  | BIGINT                    
 TICKET_ID           | BIGINT                    
 STATUS              | VARCHAR(STRING)           
 TICKETCATEGORY_ID   | BIGINT                    
 SUBJECT             | VARCHAR(STRING)           
 PRIORITY            | VARCHAR(STRING)           
 STARTTIME           | BIGINT                    
 ENDTIME             | BIGINT                    
 CHANGETIME          | BIGINT                    
 REMINDTIME          | BIGINT                    
 DEADLINE            | INTEGER                   
 CONTACT_ID          | BIGINT           
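
(For reference, the tickets stream queried below sits on top of the source topic. With Avro values, registering such a stream in KSQL can be as simple as the following sketch; the topic name 'tickets' is an assumption, and the columns are taken from the Schema Registry rather than listed by hand.)

-- Sketch only: topic name and Avro value format are assumptions.
-- With AVRO, KSQL 5.2 reads the column list from the Schema Registry.
CREATE STREAM tickets WITH (KAFKA_TOPIC='tickets', VALUE_FORMAT='AVRO');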

We want to use that data to get, for each customer, the number of tickets in a given status (open, waiting, in progress, etc.). This data should end up as one message per customer in another topic; the schema might look like this:

 Field               | Type                      
-------------------------------------------------
 ROWTIME             | BIGINT           (system) 
 ROWKEY              | VARCHAR(STRING)  (system) 
 CONTACT_ID          | BIGINT                    
 COUNT_OPEN          | BIGINT                    
 COUNT_WAITING       | BIGINT                    
 COUNT_CLOSED        | BIGINT                    

We plan to use this data, together with other data, to enrich the customer information and publish the enriched data set to an external system (e.g. Elasticsearch).

The first part is easy enough: group the tickets by customer and status.

select contact_id, status, count(*) cnt from tickets group by contact_id, status;

But now we are stuck: we get multiple rows/messages per customer, and we just don't know how to turn them into a single message keyed by contact_id.

We have tried joins, but none of our attempts led anywhere.

Create a table with all tickets in status 'waiting' per customer:

create table waiting_tickets_by_cust with (partitions=12,value_format='AVRO')
as select contact_id, count(*) cnt from tickets where status='waiting' group by contact_id;

Rekey that table:

CREATE TABLE T_WAITING_REKEYED WITH (KAFKA_TOPIC='WAITING_TICKETS_BY_CUST',
       VALUE_FORMAT='AVRO',
       KEY='contact_id');

Left (outer) joining that table with our customers table gives us all customers that have tickets in status 'waiting':

select c.id,w.cnt wcnt from T_WAITING_REKEYED w left join CRM_CONTACTS c on w.contact_id=c.id;

But for the next join against the tickets in status 'processing' we need all customers, with a NULL waiting count where there is none. Since the left side only contains the customers that have waiting tickets, we only ever get the customers that have tickets in both statuses:

ksql> select c.*,t.cnt from T_PROCESSING_REKEYED t left join cust_ticket_tmp1 c on t.contact_id=c.id;
null | null | null | null | 1
1555261086669 | 1472 | 1472 | 0 | 1
1555261086669 | 1472 | 1472 | 0 | 1
null | null | null | null | 1
1555064371937 | 1474 | 1474 | 1 | 1
null | null | null | null | 1
1555064371937 | 1474 | 1474 | 1 | 1
null | null | null | null | 1
null | null | null | null | 1
null | null | null | null | 1
1555064372018 | 3 | 3 | 5 | 6
1555064372018 | 3 | 3 | 5 | 6
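
(For illustration only: a chain of joins that keeps every customer would have to put the customers table on the left of each step and materialize one join at a time, because KSQL 5.2 only supports two-way joins. The result names CUST_WITH_WAITING and CUST_WITH_WAITING_PROCESSING below are hypothetical, and the sketch assumes that all tables involved are co-partitioned and keyed on the contact id.)

-- Sketch: first join, customers on the left so every customer keeps a row;
-- waiting_cnt stays NULL where a customer has no waiting tickets.
CREATE TABLE CUST_WITH_WAITING AS
  SELECT c.id AS id, w.cnt AS waiting_cnt
  FROM CRM_CONTACTS c
  LEFT JOIN T_WAITING_REKEYED w ON c.id = w.contact_id;

-- Sketch: second join against the next status table, again keeping the
-- customer side on the left.
CREATE TABLE CUST_WITH_WAITING_PROCESSING AS
  SELECT cw.id AS id, cw.waiting_cnt AS waiting_cnt, p.cnt AS processing_cnt
  FROM CUST_WITH_WAITING cw
  LEFT JOIN T_PROCESSING_REKEYED p ON cw.id = p.contact_id;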

So what is the right way to do this?

This is KSQL 5.2.1.

Thanks

Edit:

Here is some sample data.

We created a topic that restricts the data to a test account:

CREATE STREAM tickets_filtered
  WITH (
        PARTITIONS=12,
        VALUE_FORMAT='JSON') AS
  SELECT id,
         contact_id,
         subject,
         status,
         TIMESTAMPTOSTRING(changetime, 'yyyy-MM-dd HH:mm:ss.SSS') AS timestring
  FROM tickets where contact_id=1472
  PARTITION BY contact_id;

00:06:44 1 $ kafkacat-dev -C -o beginning -t TICKETS_FILTERED
{"ID":2216,"CONTACT_ID":1472,"SUBJECT":"Test Bodenbach","STATUS":"closed","TIMESTRING":"2012-11-08 10:34:30.000"}
{"ID":8945,"CONTACT_ID":1472,"SUBJECT":"sync-test","STATUS":"waiting","TIMESTRING":"2019-04-16 23:07:01.000"}
{"ID":8945,"CONTACT_ID":1472,"SUBJECT":"sync-test","STATUS":"processing","TIMESTRING":"2019-04-16 23:52:08.000"}
Changing and adding something in the ticketing-system...
{"ID":8945,"CONTACT_ID":1472,"SUBJECT":"sync-test","STATUS":"waiting","TIMESTRING":"2019-04-17 00:10:38.000"}
{"ID":8952,"CONTACT_ID":1472,"SUBJECT":"another sync ticket","STATUS":"new","TIMESTRING":"2019-04-17 00:11:23.000"}
{"ID":8952,"CONTACT_ID":1472,"SUBJECT":"another sync ticket","STATUS":"close-request","TIMESTRING":"2019-04-17 00:12:04.000"}

From that data we want to create a topic whose messages look like this:

{"CONTACT_ID":1472,"TICKETS_CLOSED":1,"TICKET_WAITING":1,"TICKET_CLOSEREQUEST":1,"TICKET_PROCESSING":0}
apache-kafka ksql
1 Answer


You can achieve this by building a table (of the ticket state) and then building an aggregation on top of that table (written up here too).

  1. Set up the test data:

     kafkacat -b localhost -t tickets -P <<EOF
     {"ID":2216,"CONTACT_ID":1472,"SUBJECT":"Test Bodenbach","STATUS":"closed","TIMESTRING":"2012-11-08 10:34:30.000"}
     {"ID":8945,"CONTACT_ID":1472,"SUBJECT":"sync-test","STATUS":"waiting","TIMESTRING":"2019-04-16 23:07:01.000"}
     {"ID":8945,"CONTACT_ID":1472,"SUBJECT":"sync-test","STATUS":"processing","TIMESTRING":"2019-04-16 23:52:08.000"}
     {"ID":8945,"CONTACT_ID":1472,"SUBJECT":"sync-test","STATUS":"waiting","TIMESTRING":"2019-04-17 00:10:38.000"}
     {"ID":8952,"CONTACT_ID":1472,"SUBJECT":"another sync ticket","STATUS":"new","TIMESTRING":"2019-04-17 00:11:23.000"}
     {"ID":8952,"CONTACT_ID":1472,"SUBJECT":"another sync ticket","STATUS":"close-request","TIMESTRING":"2019-04-17 00:12:04.000"}
     EOF

  2. Preview the topic data:

     ksql> PRINT 'tickets' FROM BEGINNING;
     Format:JSON
     {"ROWTIME":1555511270573,"ROWKEY":"null","ID":2216,"CONTACT_ID":1472,"SUBJECT":"Test Bodenbach","STATUS":"closed","TIMESTRING":"2012-11-08 10:34:30.000"}
     {"ROWTIME":1555511270573,"ROWKEY":"null","ID":8945,"CONTACT_ID":1472,"SUBJECT":"sync-test","STATUS":"waiting","TIMESTRING":"2019-04-16 23:07:01.000"}
     {"ROWTIME":1555511270573,"ROWKEY":"null","ID":8945,"CONTACT_ID":1472,"SUBJECT":"sync-test","STATUS":"processing","TIMESTRING":"2019-04-16 23:52:08.000"}
     {"ROWTIME":1555511270573,"ROWKEY":"null","ID":8945,"CONTACT_ID":1472,"SUBJECT":"sync-test","STATUS":"waiting","TIMESTRING":"2019-04-17 00:10:38.000"}
     {"ROWTIME":1555511270573,"ROWKEY":"null","ID":8952,"CONTACT_ID":1472,"SUBJECT":"another sync ticket","STATUS":"new","TIMESTRING":"2019-04-17 00:11:23.000"}
     {"ROWTIME":1555511270573,"ROWKEY":"null","ID":8952,"CONTACT_ID":1472,"SUBJECT":"another sync ticket","STATUS":"close-request","TIMESTRING":"2019-04-17 00:12:04.000"}

  3. Register the stream:

     CREATE STREAM TICKETS (ID INT, CONTACT_ID VARCHAR, SUBJECT VARCHAR, STATUS VARCHAR, TIMESTRING VARCHAR)
       WITH (KAFKA_TOPIC='tickets', VALUE_FORMAT='JSON');

  4. Query the data:

     ksql> SET 'auto.offset.reset' = 'earliest';
     ksql> SELECT * FROM TICKETS;
     1555502643806 | null | 2216 | 1472 | Test Bodenbach | closed | 2012-11-08 10:34:30.000
     1555502643806 | null | 8945 | 1472 | sync-test | waiting | 2019-04-16 23:07:01.000
     1555502643806 | null | 8945 | 1472 | sync-test | processing | 2019-04-16 23:52:08.000
     1555502643806 | null | 8945 | 1472 | sync-test | waiting | 2019-04-17 00:10:38.000
     1555502643806 | null | 8952 | 1472 | another sync ticket | new | 2019-04-17 00:11:23.000
     1555502643806 | null | 8952 | 1472 | another sync ticket | close-request | 2019-04-17 00:12:04.000

  5. At this point we can aggregate over the stream with SUM(CASE…)…GROUP BY:

     SELECT CONTACT_ID,
            SUM(CASE WHEN STATUS='new' THEN 1 ELSE 0 END) AS TICKETS_NEW,
            SUM(CASE WHEN STATUS='processing' THEN 1 ELSE 0 END) AS TICKETS_PROCESSING,
            SUM(CASE WHEN STATUS='waiting' THEN 1 ELSE 0 END) AS TICKETS_WAITING,
            SUM(CASE WHEN STATUS='close-request' THEN 1 ELSE 0 END) AS TICKETS_CLOSEREQUEST,
            SUM(CASE WHEN STATUS='closed' THEN 1 ELSE 0 END) AS TICKETS_CLOSED
       FROM TICKETS
      GROUP BY CONTACT_ID;

     1472 | 1 | 1 | 2 | 1 | 1

     However, you will notice that the result is not what you would expect. That is because we are counting all six input events. Take one ticket, ID 8945: it went through three state changes (waiting -> processing -> waiting), each of which is included in the aggregate. We can verify this with a simple predicate:

     SELECT CONTACT_ID,
            SUM(CASE WHEN STATUS='new' THEN 1 ELSE 0 END) AS TICKETS_NEW,
            SUM(CASE WHEN STATUS='processing' THEN 1 ELSE 0 END) AS TICKETS_PROCESSING,
            SUM(CASE WHEN STATUS='waiting' THEN 1 ELSE 0 END) AS TICKETS_WAITING,
            SUM(CASE WHEN STATUS='close-request' THEN 1 ELSE 0 END) AS TICKETS_CLOSEREQUEST,
            SUM(CASE WHEN STATUS='closed' THEN 1 ELSE 0 END) AS TICKETS_CLOSED
       FROM TICKETS
      WHERE ID=8945
      GROUP BY CONTACT_ID;

     1472 | 0 | 1 | 2 | 0 | 0

  6. What we really want is the current state of each ticket. So repartition the data on the ticket ID:

     CREATE STREAM TICKETS_BY_ID AS SELECT * FROM TICKETS PARTITION BY ID;

     CREATE TABLE TICKETS_TABLE (ID INT, CONTACT_ID INT, SUBJECT VARCHAR, STATUS VARCHAR, TIMESTRING VARCHAR)
       WITH (KAFKA_TOPIC='TICKETS_BY_ID', VALUE_FORMAT='JSON', KEY='ID');

  7. Compare the event stream with the current state.

     Event stream (KSQL stream):

     ksql> SELECT ID, TIMESTRING, STATUS FROM TICKETS;
     2216 | 2012-11-08 10:34:30.000 | closed
     8945 | 2019-04-16 23:07:01.000 | waiting
     8945 | 2019-04-16 23:52:08.000 | processing
     8945 | 2019-04-17 00:10:38.000 | waiting
     8952 | 2019-04-17 00:11:23.000 | new
     8952 | 2019-04-17 00:12:04.000 | close-request

     Current state (KSQL table):

     ksql> SELECT ID, TIMESTRING, STATUS FROM TICKETS_TABLE;
     2216 | 2012-11-08 10:34:30.000 | closed
     8945 | 2019-04-17 00:10:38.000 | waiting
     8952 | 2019-04-17 00:12:04.000 | close-request

  8. Now we want the aggregate over the table: the same SUM(CASE…)…GROUP BY trick as above, but based on the current state of each ticket rather than on every event:

     SELECT CONTACT_ID,
            SUM(CASE WHEN STATUS='new' THEN 1 ELSE 0 END) AS TICKETS_NEW,
            SUM(CASE WHEN STATUS='processing' THEN 1 ELSE 0 END) AS TICKETS_PROCESSING,
            SUM(CASE WHEN STATUS='waiting' THEN 1 ELSE 0 END) AS TICKETS_WAITING,
            SUM(CASE WHEN STATUS='close-request' THEN 1 ELSE 0 END) AS TICKETS_CLOSEREQUEST,
            SUM(CASE WHEN STATUS='closed' THEN 1 ELSE 0 END) AS TICKETS_CLOSED
       FROM TICKETS_TABLE
      GROUP BY CONTACT_ID;

     This gives us what we want:

     1472 | 0 | 0 | 1 | 1 | 1

  9. Feed the events of another ticket into the topic and watch how the state of the table changes. Rows in the table are re-emitted as their state changes; you can also cancel the SELECT and re-run it to see only the current state. Sample data to try this out for yourself:

     {"ID":8946,"CONTACT_ID":42,"SUBJECT":"","STATUS":"new","TIMESTRING":"2019-04-16 23:07:01.000"}
     {"ID":8946,"CONTACT_ID":42,"SUBJECT":"","STATUS":"processing","TIMESTRING":"2019-04-16 23:07:01.000"}
     {"ID":8946,"CONTACT_ID":42,"SUBJECT":"","STATUS":"waiting","TIMESTRING":"2019-04-16 23:07:01.000"}
     {"ID":8946,"CONTACT_ID":42,"SUBJECT":"","STATUS":"processing","TIMESTRING":"2019-04-16 23:07:01.000"}
     {"ID":8946,"CONTACT_ID":42,"SUBJECT":"","STATUS":"waiting","TIMESTRING":"2019-04-16 23:07:01.000"}
     {"ID":8946,"CONTACT_ID":42,"SUBJECT":"","STATUS":"closed","TIMESTRING":"2019-04-16 23:07:01.000"}
     {"ID":8946,"CONTACT_ID":42,"SUBJECT":"","STATUS":"close-request","TIMESTRING":"2019-04-16 23:07:01.000"}

If you want to experiment further, you can generate an additional stream of dummy data from Mockaroo, piped through awk to slow it down, so that you can see the effect on the resulting aggregate as each message arrives.
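
To connect this back to the original goal of enriching customer records, the aggregate from step 8 can itself be materialized as a table and then left-joined from the contacts table, so that every customer keeps a row and the counts are NULL where a customer has no tickets. A sketch only: the table name TICKET_COUNTS_BY_CONTACT is hypothetical, and it assumes a contacts table CRM_CONTACTS that is co-partitioned with it and keyed on the same contact id.

-- Sketch: materialize the per-customer counts from step 8 as a table.
CREATE TABLE TICKET_COUNTS_BY_CONTACT AS
  SELECT CONTACT_ID,
         SUM(CASE WHEN STATUS='new' THEN 1 ELSE 0 END) AS TICKETS_NEW,
         SUM(CASE WHEN STATUS='processing' THEN 1 ELSE 0 END) AS TICKETS_PROCESSING,
         SUM(CASE WHEN STATUS='waiting' THEN 1 ELSE 0 END) AS TICKETS_WAITING,
         SUM(CASE WHEN STATUS='close-request' THEN 1 ELSE 0 END) AS TICKETS_CLOSEREQUEST,
         SUM(CASE WHEN STATUS='closed' THEN 1 ELSE 0 END) AS TICKETS_CLOSED
  FROM TICKETS_TABLE
  GROUP BY CONTACT_ID;

-- Sketch: left-join the counts from the contacts table; the counts are NULL
-- for customers that have no tickets.
SELECT c.id, t.TICKETS_WAITING, t.TICKETS_PROCESSING, t.TICKETS_CLOSED
  FROM CRM_CONTACTS c
  LEFT JOIN TICKET_COUNTS_BY_CONTACT t ON c.id = t.CONTACT_ID;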