Kafka to Pandas DataFrame without Spark

Problem description

I am reading streaming data from a Kafka topic, and I want to store parts of it in a pandas DataFrame.

from confluent_kafka import Consumer, KafkaError

c = Consumer({
    'bootstrap.servers': "###",
    'group.id': '###',
    'default.topic.config': {
        'auto.offset.reset': 'latest'
    }
})

c.subscribe(['scorestore'])

while True:
    # Poll for a new message, waiting up to 1 second
    msg = c.poll(1.0)

    if msg is None:
        continue
    if msg.error():
        # Reaching the end of a partition is not a real error; keep polling
        if msg.error().code() == KafkaError._PARTITION_EOF:
            continue
        else:
            print(msg.error())
            break

    print('Received message: {}'.format(msg.value().decode('utf-8')))

c.close()

The received message is JSON:

{
  "messageHeader" : {
    "messageId" : "4b604b33-7256-47b6-89d6-eb1d92a282e6",
    "timestamp" : 152520000,
    "sourceHost" : "test",
    "sourceLocation" : "test",
    "tags" : [ ],
    "version" : "1.0"
  },
  "id_value" : {
    "id" : "1234",
    "value" : "333.0"
  }
}

I am trying to create a DataFrame with timestamp, id and value columns:

   timestamp    id  value
0  152520000  1234  333.0

Is there a way to accomplish this without parsing the JSON message and appending the values I need to the DataFrame row by row?

python json pandas apache-kafka
1 Answer

The solution I propose may be a bit tricky. Imagine that you have your JSON message in a string called 'msg_str':

import pandas as pd

msg_str = '{  "messageHeader" : { "messageId" : "4b604b33-7256-47b6-89d6-eb1d92a282e6",    "timestamp" : 152520000,    "sourceHost" : "test",    "sourceLocation" : "test",    "tags" : [ ],    "version" : "1.0"  },  "id_value" : {    "id" : "1234",    "value" : "333.0"  }}'


# First create a DataFrame with read_json
# (on pandas >= 2.1, wrap the string in io.StringIO(msg_str) to silence the
# deprecation warning about passing a literal JSON string)
p = pd.read_json(msg_str)
# Now you have a DataFrame with two columns. Where one column has a value, the
# other has a NaN. Create a new column keeping only the non-NaN values
p['fussion'] = p['id_value'].fillna(p['messageHeader'])
# Drop 'id_value' and 'messageHeader', as you don't need them anymore
p = p[['fussion']].reset_index()
# Create a temporary column only to serve as the index for the pivot
p['tmp'] = 0
# Pivot to convert rows into columns
p = p.pivot(index='tmp', columns='index', values='fussion')
# Finally keep only the columns you are interested in
p = p.reset_index()[['timestamp', 'id', 'value']]

print(p)

Result:

index  timestamp    id value
0      152520000  1234   333

You can then append this DataFrame to the DataFrame where you are accumulating your results.
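Below is a minimal sketch of that accumulation step, wrapping the steps above in a hypothetical helper called message_to_frame. Note that DataFrame.append was removed in pandas 2.0, so collecting the per-message frames in a list and calling pd.concat once is the idiomatic (and much faster) pattern:

import pandas as pd

frames = []  # one small DataFrame per consumed message

def message_to_frame(msg_str):
    # Hypothetical helper: applies the read_json/fillna/pivot steps above
    # to turn one JSON message into a one-row DataFrame
    p = pd.read_json(msg_str)
    p['fussion'] = p['id_value'].fillna(p['messageHeader'])
    p = p[['fussion']].reset_index()
    p['tmp'] = 0
    p = p.pivot(index='tmp', columns='index', values='fussion')
    return p.reset_index()[['timestamp', 'id', 'value']]

# Inside the consumer loop, after decoding the message:
#     frames.append(message_to_frame(msg.value().decode('utf-8')))

# After the loop, build the accumulated DataFrame in one pass,
# which is much cheaper than concatenating on every message
result = pd.concat(frames, ignore_index=True)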

Maybe there is a simpler solution, but I hope this helps if that is not the case.
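For what it's worth, here is a sketch of the plainer route: explicitly parsing each message with the standard json module. This is exactly the parsing the question hoped to avoid, but it sidesteps the pivot trick entirely; the helper name extract_row is made up for illustration:

import json
import pandas as pd

rows = []

def extract_row(msg_str):
    # Pull only the three fields of interest out of one message
    doc = json.loads(msg_str)
    return {
        'timestamp': doc['messageHeader']['timestamp'],
        'id': doc['id_value']['id'],
        'value': float(doc['id_value']['value']),
    }

# Inside the consumer loop:
#     rows.append(extract_row(msg.value().decode('utf-8')))

df = pd.DataFrame(rows, columns=['timestamp', 'id', 'value'])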
