Delta Live Tables with EventHub

Question · Votes: 0 · Answers: 2

I'm trying to create a stream from Event Hubs using Delta Live Tables, but I'm running into problems installing the library. Is it possible to install a Maven library with sh/pip when using Delta Live Tables?

I want to install com.microsoft.azure:azure-eventhubs-spark_2.12:2.3.17

https://learn.microsoft.com/pl-pl/azure/databricks/spark/latest/structed-streaming/streaming-event-hubs

pyspark databricks azure-eventhub delta-live-tables
2 Answers

7 votes

It's currently not possible to use external connectors/Java libraries with Delta Live Tables. For EventHubs, however, there is a workaround: you can connect to EventHubs through the built-in Kafka connector; you just need to specify the correct options as described in the documentation:

import dlt

@dlt.table
def eventhubs():
  # Event Hubs namespace connection string
  readConnectionString="Endpoint=sb://<....>.windows.net/;?.."
  # For Event Hubs over the Kafka protocol, the SASL username is literally "$ConnectionString"
  # and the password is the full connection string
  eh_sasl = f'kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{readConnectionString}";'
  kafka_options = {
     "kafka.bootstrap.servers": "<eh-ns-name>.servicebus.windows.net:9093",
     "kafka.sasl.mechanism": "PLAIN",
     "kafka.security.protocol": "SASL_SSL",
     "kafka.request.timeout.ms": "60000",
     "kafka.session.timeout.ms": "30000",
     "startingOffsets": "earliest",
     "kafka.sasl.jaas.config": eh_sasl,
     "subscribe": "<topic-name>",
  }
  return spark.readStream.format("kafka") \
    .options(**kafka_options).load()
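
Rather than hardcoding the connection string in the notebook, you could read it from a Databricks secret; a minimal sketch, assuming a secret scope and key already exist (the names eventhubs and connection-string below are hypothetical):

# Hypothetical secret scope/key names; replace with whatever you have configured
readConnectionString = dbutils.secrets.get(scope="eventhubs", key="connection-string")
eh_sasl = ('kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule required '
           f'username="$ConnectionString" password="{readConnectionString}";')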

0 votes

Set up a DLT pipeline with Azure Event Hubs as the source. Python is used for the bronze table that reads from Event Hubs, and SQL for the silver and gold tables:

To Event Hubs I send the following JSON; make sure it is not a list:

{
    "id": "2",
    "name": "xyz1"
}
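
For reference, an event in that shape could be published with the azure-eventhub Python SDK; this is just a sketch, with a placeholder connection string and the event hub name from the configuration below:

import json
from azure.eventhub import EventHubProducerClient, EventData

# Placeholder connection string; eventhub_name matches EH_NAME below
producer = EventHubProducerClient.from_connection_string(
    conn_str="Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...",
    eventhub_name="abc",
)
with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps({"id": "2", "name": "xyz1"})))  # one JSON object, not a list
    producer.send_batch(batch)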

See the Databricks docs for how to set up the bronze layer in Python.

Python notebook

import dlt
import pyspark.sql.types as T
from pyspark.sql.functions import *

# Event Hubs configuration
EH_NAMESPACE                    = "xyz-eventhub1"
EH_NAME                         = "abc"

EH_CONN_SHARED_ACCESS_KEY_NAME  = "RootManageSharedAccessKey"
# SECRET_SCOPE                    = spark.conf.get("io.ingestion.eh.secretsScopeName")
EH_CONN_SHARED_ACCESS_KEY_VALUE = "xyz="

EH_CONN_STR                     = f"Endpoint=sb://{EH_NAMESPACE}.servicebus.windows.net/;SharedAccessKeyName={EH_CONN_SHARED_ACCESS_KEY_NAME};SharedAccessKey={EH_CONN_SHARED_ACCESS_KEY_VALUE}"

# Kafka Consumer configuration

KAFKA_OPTIONS = {
  "kafka.bootstrap.servers"  : f"{EH_NAMESPACE}.servicebus.windows.net:9093",
  "subscribe"                : EH_NAME,
  "kafka.sasl.mechanism"     : "PLAIN",
  "kafka.security.protocol"  : "SASL_SSL",
  "kafka.sasl.jaas.config"   : f"kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule required username=\"$ConnectionString\" password=\"{EH_CONN_STR}\";",
  "kafka.request.timeout.ms" : "60000",
  "kafka.session.timeout.ms" : "60000",
#   "maxOffsetsPerTrigger"     : spark.conf.get("iot.ingestion.spark.maxOffsetsPerTrigger"),
  "failOnDataLoss"           : "false",
  "startingOffsets"          : "earliest"
}

# PAYLOAD SCHEMA
payload_ddl = """id STRING, name STRING"""
payload_schema = T._parse_datatype_string(payload_ddl)

# Basic record parsing and adding ETL audit columns
def parse(df):
  print(df)
  return (df
    .withColumn("records", col("value").cast("string"))
    .withColumn("parsed_records", from_json(col("records"), payload_schema))
    # .withColumn("iot_event_timestamp", expr("cast(from_unixtime(parsed_records.timestamp / 1000) as timestamp)"))
    .withColumn("id", expr("parsed_records.id"))
    .withColumn("name", expr("parsed_records.name"))
    .withColumn("eh_enqueued_timestamp", expr("timestamp")) # when event was enqueued
    .withColumn("eh_enqueued_date", expr("to_date(timestamp)"))
    .withColumn("bronze_timestamp", col("current_timestamp"))
    .withColumn("bronze_uuid", expr("uuid()"))
    .drop("records", "value", "key")
  )

@dlt.create_table(
  comment="Raw events from kafka",
  table_properties={
    "quality": "bronze",
    "pipelines.reset.allowed": "false" # preserves the data in the delta table if you do full refresh
  },
  partition_cols=["eh_enqueued_date"]
)
@dlt.expect("valid_topic", "topic IS NOT NULL")
@dlt.expect("valid records", "parsed_records IS NOT NULL")
def kafka_bronze():
  return (
   spark.readStream
    .format("kafka")
    .options(**KAFKA_OPTIONS)
    .load()
    .transform(parse)
  )
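
To sanity-check the parse logic outside of a pipeline run, you could apply it to a hand-built DataFrame that mimics the Kafka columns it touches (key, value, timestamp, topic); the sample row below is made up:

from pyspark.sql.functions import to_timestamp

# Hand-built row with the same columns the Kafka source would provide
sample = (
    spark.createDataFrame(
        [("k1", '{"id": "2", "name": "xyz1"}', "2024-01-01 00:00:00", "abc")],
        ["key", "value", "timestamp", "topic"],
    )
    .withColumn("timestamp", to_timestamp("timestamp"))
)
sample.transform(parse).show(truncate=False)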

SQL notebook

Silver table

CREATE STREAMING LIVE TABLE kafka_cleaned(
  CONSTRAINT id_not_null EXPECT (id IS NOT NULL)
)
COMMENT "Cleaned kafka table"
TBLPROPERTIES ("companyPipeline.quality" = "silver")
AS
SELECT 
cast(id as int) as id,
name,
eh_enqueued_timestamp,
bronze_timestamp,
CURRENT_TIMESTAMP as silver_timestamp
FROM STREAM(LIVE.kafka_bronze) 
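
The same silver logic could also stay in Python if you prefer a single notebook; a rough equivalent using dlt.read_stream (the name kafka_cleaned_py is hypothetical, to avoid clashing with the SQL table):

@dlt.table(
  comment="Cleaned kafka table",
  table_properties={"companyPipeline.quality": "silver"}
)
@dlt.expect("id_not_null", "id IS NOT NULL")
def kafka_cleaned_py():
  # Stream from the bronze table and apply the same cast and audit column
  return (
    dlt.read_stream("kafka_bronze")
      .selectExpr(
        "cast(id as int) as id",
        "name",
        "eh_enqueued_timestamp",
        "bronze_timestamp",
        "current_timestamp() as silver_timestamp",
      )
  )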

Gold table

CREATE STREAMING LIVE TABLE kafka_gold
COMMENT "count of id per name"
TBLPROPERTIES ("companyPipeline.quality" = "gold")
AS
SELECT count(id) as count_id,
name,
CURRENT_TIMESTAMP as gold_timestamp 
FROM STREAM(LIVE.kafka_cleaned)
group by name

Use these two notebooks as sources in the DLT pipeline.
