Writing data from Kafka to Hive with PySpark - stuck

Question · votes: 0 · answers: 1

I am just getting started with PySpark, and I am learning how to push data from Kafka into Hive.

from os.path import abspath

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructField, StructType, StringType

warehouseLocation = abspath("spark-warehouse")

# Hive support must be enabled on the session, and the warehouse
# location actually passed in, for Hive tables to work later
spark = (SparkSession.builder
    .appName("sparkstreaming")
    .config("spark.sql.warehouse.dir", warehouseLocation)
    .enableHiveSupport()
    .getOrCreate())

df = (spark.read.format("kafka")
    .option("startingOffsets", "earliest")
    .option("kafka.bootstrap.servers", "kafka-server1:66,kafka-server2:66")
    .option("kafka.security.protocol", "SSL")
    .option("kafka.ssl.keystore.location", "mykeystore.jks")
    .option("kafka.ssl.keystore.password", "mykeystorepassword")
    .option("subscribe", "json_stream")
    .load()
    .selectExpr("CAST(value AS STRING)"))

# from_json needs the schema of the JSON payload itself; df.schema here is
# only the single string column "value". The fields below are placeholders;
# replace them with the actual fields of your messages.
json_schema = StructType([
    StructField("id", StringType()),
    StructField("score", StringType()),
])

# $"value" is Scala syntax; in PySpark use col("value")
df1 = df.select(from_json(col("value"), json_schema).alias("data")).select("data.*")

The above did not work for me as I originally wrote it; once the data is extracted, I want to insert it into a Hive table.
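One quick way to confirm that the parsing step works, before worrying about the Hive write, is to inspect the parsed frame (this assumes the corrected code above):

df1.printSchema()            # should show the fields from json_schema, not just "value"
df1.show(5, truncate=False)  # preview a few parsed rows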

Since I am new to this, I am looking for help. Thanks in advance! :)

pyspark spark-streaming spark-streaming-kafka
1 Answer

0 votes

You can insert the data using:

# spark must be a Hive-enabled SparkSession (enableHiveSupport, as above);
# hiveContext is the deprecated pre-Spark-2.0 API
df = spark.sql("select 1 as id, 10 as score")
df.write.mode("append").saveAsTable("my_table")
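For the actual Kafka-to-Hive flow in the question, the same append can be applied per micro-batch with Structured Streaming's foreachBatch. A minimal sketch, assuming the json_schema and Kafka options from the question (SSL options omitted for brevity), the spark-sql-kafka connector on the classpath, and a placeholder checkpoint path and table name:

from pyspark.sql.functions import col, from_json

# Read the topic as a stream instead of a one-off batch read
stream_df = (spark.readStream.format("kafka")
    .option("startingOffsets", "earliest")
    .option("kafka.bootstrap.servers", "kafka-server1:66,kafka-server2:66")
    .option("subscribe", "json_stream")
    .load()
    .selectExpr("CAST(value AS STRING)"))

parsed = stream_df.select(from_json(col("value"), json_schema).alias("data")).select("data.*")

# Append each micro-batch to the Hive table
def write_batch(batch_df, batch_id):
    batch_df.write.mode("append").saveAsTable("my_table")

query = (parsed.writeStream
    .foreachBatch(write_batch)
    .option("checkpointLocation", "/tmp/kafka_to_hive_ckpt")  # placeholder path
    .start())

query.awaitTermination()

The checkpoint location lets the stream resume from its last committed offsets after a restart instead of re-reading the topic from the beginning.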