Apache Flume getting data from a Python script


I am running a Python script that collects data from news providers, and I invoke this script from my flume.conf file.

My flume.conf file:

newsAgent.sources = r1
newsAgent.sinks = spark
newsAgent.channels = MemChannel

# Describe/configure the source
newsAgent.sources.r1.type = exec
newsAgent.sources.r1.command = python path_to/data_collector.py

# Describe the sink
newsAgent.sinks.spark.type = avro
newsAgent.sinks.spark.channel = memoryChannel
newsAgent.sinks.spark.hostname = localhost
newsAgent.sinks.spark.port = 4040

# Use a channel which buffers events in memory
newsAgent.channels.MemChannel.type = memory
newsAgent.channels.MemChannel.capacity = 10000
newsAgent.channels.MemChannel.transactionCapacity = 100

# Bind the source and sink to the channel
newsAgent.sources.r1.channels = MemChannel
newsAgent.sinks.spark.channel = MemChannel

The Python script runs fine on its own, and I can see the JSON data being printed. However, when I execute it through Flume and sink the data to Spark, I get the warning messages below.

WARNING MESSAGES

18/08/04 07:36:20 WARN HttpParser: Illegal character 0x0 in state=START 
for buffer HeapByteBuffer@5ae61d8b[p=1,l=8192,c=8192,r=8191]= . {\x00<<<\x00\x00\x01\x00\x00\x00\x06\x00\x00\x000\x86\xAa\xDa\xE2\xC4T...ing town", "sum>>>}
18/08/04 07:36:20 WARN HttpParser: bad HTTP parsed: 400 Illegal character 0x0 for HttpChannelOverHttp@46691f53{r=0,c=false,a=IDLE,uri=null}

data_collector.py

import json

import feedparser
from bs4 import BeautifulSoup

# news_source is a dict mapping a news-provider name to its RSS/Atom feed URL
def process():
    for k, v in news_source.items():
        feeds = feedparser.parse(v)
        for e in feeds.entries:
            doc = json.dumps(
                {"news_provider": k, "title": e.title.strip(), "summary": BeautifulSoup(e.summary, 'lxml').text.strip(),
                 "id": e.id.strip(), "published": e.published if 'published' in e else None})
            print("%s" % doc)

streaming_script.py

import json

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.flume import FlumeUtils

def func():
    sc = SparkContext(master="local[*]", appName="App")
    ssc = StreamingContext(sc, 300)
    flume_strm = FlumeUtils.createStream(ssc, "localhost", 9999)

    lines = flume_strm.map(lambda v: json.loads(v[1]))
    lines.pprint()
    ssc.start()
    ssc.awaitTermination()

Commands used

bin/flume-ng agent --conf conf --conf-file libexec/conf/test.conf --name Agent -Dflume.root.logger=INFO,console

spark-submit --packages org.apache.spark:spark-streaming-flume_2.11:2.2.0  path_to/streaming_script.py

I cannot get rid of those warning messages, and I expected the same JSON data to be printed in the Spark logs via pprint(), so that I could process the messages later.

Am I missing any specific configuration for reading the stream content? Do I need to specify a particular encoder?

Any help is appreciated.

python pyspark streaming flume
1 Answer

I must have followed the same tutorial as you. I tried a lot of different options, most of them without success, but I found a workaround: use an exec source in your flume.conf and call the script the way you do, but in the Python script write the data to a file, and then "cat" that file before the script (data_collector.py) finishes executing.

I think this is because the exec source expects "streamed" data, so simply printing the output does not work reliably.
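A possibly related factor, though this is only an assumption and not something verified here, is stdout buffering: when a script is launched by an exec source rather than a terminal, its output is typically block-buffered, so printed lines may not reach Flume promptly. A minimal sketch of emitting one JSON document per line with an explicit flush, as an alternative idea to the file-plus-"cat" workaround, could look like this:

import json
import sys

def emit(doc):
    # Print one JSON document per line and flush right away so a consumer
    # reading this script's stdout (such as a Flume exec source) receives
    # each event as soon as it is produced. This is a sketch of an
    # alternative idea, not the workaround described in this answer.
    sys.stdout.write(json.dumps(doc) + "\n")
    sys.stdout.flush()

if __name__ == "__main__":
    emit({"news_provider": "example", "title": "A headline", "summary": "", "id": "1", "published": None})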

My setup is very similar to yours:

stream.py (logic removed for ease of understanding):

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.flume import FlumeUtils

if __name__ == "__main__":
    sc = SparkContext(appName="test")
    ssc = StreamingContext(sc, 30)
    stream = FlumeUtils.createStream(ssc, "127.0.0.1", 55555)
    stream.pprint()
    # Start the streaming job and block until it is stopped
    ssc.start()
    ssc.awaitTermination()

And here is my data_collector.py (note the last line with the "cat" command):

#! /usr/bin/python
import os
import random

import requests


class RandResp():
    def __init__(self):
        # Fetch a random record from the Star Wars API
        self.url = "https://swapi.co/api/people/"
        self.rand = str(random.randint(0, 17))
        self.r = requests.get(self.url + self.rand)

    def get_r(self):
        return self.r.text

if __name__ == "__main__":
    # Write the response to a file, then "cat" it so the Flume exec
    # source picks up the file contents as the command's output.
    with open("exec.txt", "w") as f:
        f.write(RandResp().get_r())
    os.system("cat exec.txt")

And here is my flume.conf:

# list sources, sinks and channels in the agent
agent.sources = tail-file
agent.channels = c1
agent.sinks=avro-sink

# define the flow
agent.sources.tail-file.channels = c1
agent.sinks.avro-sink.channel = c1
agent.channels.c1.type = memory
agent.channels.c1.capacity = 1000

# define source and sink
agent.sources.tail-file.type = exec
agent.sources.tail-file.command =  python /home/james/Desktop/testing/data_collector.py
agent.sources.tail-file.channels = c1
agent.sinks.avro-sink.type = avro
agent.sinks.avro-sink.hostname = 127.0.0.1
agent.sinks.avro-sink.port = 55555

So basically, in my data_collector.py I just do whatever logic needs to be done, write the result to a file called exec.txt, and then immediately "cat" that file. It works... good luck.
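Applied to the code in the question, a minimal sketch of the same workaround (reusing the question's news_source dict and feed-parsing logic, untested) could look like this:

import json
import os

import feedparser
from bs4 import BeautifulSoup

def process(news_source, out_path="exec.txt"):
    # Build one JSON document per feed entry, as in the question's script.
    docs = []
    for k, v in news_source.items():
        for e in feedparser.parse(v).entries:
            docs.append(json.dumps({
                "news_provider": k,
                "title": e.title.strip(),
                "summary": BeautifulSoup(e.summary, 'lxml').text.strip(),
                "id": e.id.strip(),
                "published": e.published if 'published' in e else None,
            }))
    # Write everything to a file first, then "cat" it so the Flume exec
    # source receives the output at the end, per the workaround above.
    with open(out_path, "w") as f:
        f.write("\n".join(docs) + "\n")
    os.system("cat " + out_path)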
