I want to convert a transactional dataset into SCD2 form, capturing the time interval during which each combination was effective at the pivot grain.
Snowflake is the DBMS I'm actually working in, but I've also tagged Oracle since the two dialects are nearly identical; that said, I could probably adapt a solution written for any DBMS.
I have working SQL, but it grew out of trial and error, and I feel there must be a more elegant approach I'm missing, because what I have is very ugly and computationally expensive.
(Note: the second record in the input "expires" the first one. You can assume every day of interest appears at least once as an Add_Dts.) (Added as an image at the end until I figure out why the markdown isn't working.)
Input:
Original_Grain | Pivot_Grain | Pivot_Column | Pivot_Attribute | Add_Dts |
---|---|---|---|---|
OG-1 | PG-1 | First_Col | A | 2020-01-01 |
OG-1 | PG-1 | First_Col | B | 2020-01-02 |
OG-2 | PG-1 | Second_Col | A | 2020-01-01 |
OG-3 | PG-1 | Third_Col | C | 2020-01-02 |
OG-3 | PG-1 | Third_Col | B | 2020-01-03 |
Output:
Pivot_Grain | First_Col | Second_Col | Third_Col | From_Dt | To_Dt |
---|---|---|---|---|---|
PG-1 | A | A | NULL | 2020-01-01 | 2020-01-02 |
PG-1 | B | A | C | 2020-01-02 | 2020-01-03 |
PG-1 | B | A | B | 2020-01-03 | 9999-01-01 |
WITH INPUT AS
( SELECT 'OG-1' AS Original_Grain,
'PG-1' AS Pivot_Grain,
'First_Col' AS Pivot_Column,
'A' AS Pivot_Attribute,
TO_DATE('2020-01-01','YYYY-MM-DD') AS Add_Dts
FROM dual
UNION
SELECT 'OG-1' AS Original_Grain,
'PG-1' AS Pivot_Grain,
'First_Col' AS Pivot_Column,
'B' AS Pivot_Attribute,
TO_DATE('2020-01-02','YYYY-MM-DD')
FROM dual
UNION
SELECT 'OG-2' AS Original_Grain,
'PG-1' AS Pivot_Grain,
'Second_Col' AS Pivot_Column,
'A' AS Pivot_Attribute,
TO_DATE('2020-01-01','YYYY-MM-DD')
FROM dual
UNION
SELECT 'OG-3' AS Original_Grain,
'PG-1' AS Pivot_Grain,
'Third_Col' AS Pivot_Column,
'C' AS Pivot_Attribute,
TO_DATE('2020-01-02','YYYY-MM-DD')
FROM dual
UNION
SELECT 'OG-3' AS Original_Grain,
'PG-1' AS Pivot_Grain,
'Third_Col' AS Pivot_Column,
'B' AS Pivot_Attribute,
TO_DATE('2020-01-03','YYYY-MM-DD')
FROM dual
),
GET_NORMALIZED_RANGES AS
( SELECT I.*,
COALESCE(
LEAD(Add_Dts) OVER (
PARTITION BY I.Original_Grain
ORDER BY I.Add_Dts), TO_DATE('9000-01-01')
) AS Next_Add_Dts
FROM INPUT I
),
GET_DISTINCT_ADD_DATES AS
( SELECT DISTINCT Add_Dts AS Driving_Date
FROM Input
),
NORMALIZED_EFFECTIVE_AT_EACH_POINT AS
( SELECT GNR.*,
GDAD.Driving_Date
FROM GET_NORMALIZED_RANGES GNR
INNER
JOIN GET_DISTINCT_ADD_DATES GDAD
ON GDAD.driving_date >= GNR.add_dts
AND GDAD.driving_Date < GNR.next_add_dts
),
PIVOT_EACH_POINT AS
( SELECT DISTINCT
Pivot_Grain,
Driving_Date,
MAX("'First_Col'") OVER ( PARTITION BY Pivot_Grain, Driving_Date) AS First_Col,
MAX("'Second_Col'") OVER ( PARTITION BY Pivot_Grain, Driving_Date) AS Second_Col,
MAX("'Third_Col'") OVER ( PARTITION BY Pivot_Grain, Driving_Date) AS Third_Col
FROM NORMALIZED_EFFECTIVE_AT_EACH_POINT NEP
PIVOT (MAX(Pivot_Attribute) FOR PIVOT_COLUMN IN ('First_Col','Second_Col','Third_Col'))
)
SELECT Pivot_Grain,
Driving_Date AS From_Dt,
COALESCE(LEAD(Driving_Date) OVER ( PARTITION BY pivot_grain ORDER BY Driving_Date),TO_DATE('9999-01-01')) AS To_Dt,
First_Col,
Second_Col,
Third_Col
FROM PIVOT_EACH_POINT
So the input can be written with the VALUES operator, putting the column names into the CTE definition, which takes up much less space:
WITH input(original_grain, pivot_grain, pivot_column, pivot_attribute, add_dts) AS (
SELECT * FROM VALUES
('OG-1', 'PG-1', 'First_Col', 'A', '2020-01-01'::date),
('OG-1', 'PG-1', 'First_Col', 'B', '2020-01-02'::date),
('OG-2', 'PG-1', 'Second_Col', 'A', '2020-01-01'::date),
('OG-3', 'PG-1', 'Third_Col', 'C', '2020-01-02'::date),
('OG-3', 'PG-1', 'Third_Col', 'B', '2020-01-03'::date)
)
The LEAD can be simplified by using its default-value argument (an implicit COALESCE); that said, IGNORE NULLS is a great tool when this type of data has gaps (a sketch of that variant follows the CTE below).
, get_normalized_ranges AS (
SELECT
*
,LEAD(add_dts,1,'9000-01-01'::date) OVER (PARTITION BY original_grain ORDER BY add_dts) AS next_add_dts
FROM input
)
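For reference, a minimal sketch of the IGNORE NULLS flavour mentioned above (purely illustrative; this sample data has no NULL add_dts, so the default-value form is all you need here):
, get_normalized_ranges AS (
    SELECT
        *
        -- IGNORE NULLS makes LEAD skip rows whose add_dts is NULL and take the next non-NULL date
        ,LEAD(add_dts, 1, '9000-01-01'::date) IGNORE NULLS
            OVER (PARTITION BY original_grain ORDER BY add_dts) AS next_add_dts
    FROM input
)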
get_distinct_add_dates looks fine as it is.
, get_distinct_add_dates AS (
SELECT DISTINCT add_dts AS driving_date
FROM input
)
Given your data, normalized_effective_at_each_point will do that and give you a value at each point in time/date, and it should keep unrelated values split apart (I'm assuming pivot_grain is the id of some global thing carrying distinct data, so this input exercises that):
('OG-1', 'PG-1', 'First_Col', 'A', '2020-01-01'::date),
('OG-1', 'PG-1', 'First_Col', 'B', '2020-01-03'::date),
('OG-2', 'PG-1', 'Second_Col','A', '2020-01-01'::date),
('OG-3', 'PG-1', 'Third_Col', 'C', '2020-01-03'::date),
('OG-3', 'PG-1', 'Third_Col', 'B', '2020-01-05'::date),
('OG-4', 'PG-2', 'First_Col', 'D', '2020-02-02'::date),
('OG-4', 'PG-2', 'First_Col', 'E', '2020-02-04'::date),
('OG-5', 'PG-2', 'Second_Col','D', '2020-02-02'::date),
('OG-6', 'PG-2', 'Third_Col', 'F', '2020-02-04'::date),
('OG-6', 'PG-2', 'Third_Col', 'D', '2020-02-06'::date)
At which point get_distinct_add_dates should become:
, get_distinct_add_dates AS (
SELECT DISTINCT pivot_grain, add_dts AS driving_date
FROM input
)
An INNER JOIN is just a JOIN, so we can drop the unneeded INNER:
, normalized_effective_at_each_point AS (
SELECT gnr.*,
gdad.driving_date
FROM get_normalized_ranges AS gnr
JOIN get_distinct_add_dates AS gdad
ON gnr.pivot_grain = gdad.pivot_grain
AND gdad.driving_date >= gnr.add_dts
AND gdad.driving_date < gnr.next_add_dts
),
Really, pivot_each_point is a three-way JOIN, or it can be written with a GROUP BY, which is what the DISTINCT was actually doing for us, so the PIVOT disappears (a sketch of the JOIN form follows the GROUP BY version below):
, pivot_each_point AS (
SELECT Pivot_Grain
,Driving_Date
,MAX(IFF(pivot_column='First_Col', Pivot_Attribute, NULL)) as first_col
,MAX(IFF(pivot_column='Second_Col', Pivot_Attribute, NULL)) as second_col
,MAX(IFF(pivot_column='Third_Col', Pivot_Attribute, NULL)) as third_col
FROM normalized_effective_at_each_point
GROUP BY 1,2
)
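For comparison, a rough sketch of that three-way-JOIN form (my assumption of how it would be spelled out; the LEFT JOINs fan out if a (pivot_grain, driving_date, pivot_column) ever has more than one row, which is exactly why the GROUP BY version is nicer):
, pivot_each_point AS (
    -- one filtered copy of the exploded rows per target column, joined back on (pivot_grain, driving_date)
    SELECT d.pivot_grain
          ,d.driving_date
          ,f.pivot_attribute AS first_col
          ,s.pivot_attribute AS second_col
          ,t.pivot_attribute AS third_col
    FROM get_distinct_add_dates AS d
    LEFT JOIN normalized_effective_at_each_point AS f
           ON f.pivot_grain = d.pivot_grain AND f.driving_date = d.driving_date AND f.pivot_column = 'First_Col'
    LEFT JOIN normalized_effective_at_each_point AS s
           ON s.pivot_grain = d.pivot_grain AND s.driving_date = d.driving_date AND s.pivot_column = 'Second_Col'
    LEFT JOIN normalized_effective_at_each_point AS t
           ON t.pivot_grain = d.pivot_grain AND t.driving_date = d.driving_date AND t.pivot_column = 'Third_Col'
)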
Finally, the closing LEAD can drop the COALESCE and move into pivot_each_point:
WITH input(original_grain, pivot_grain, pivot_column, pivot_attribute, add_dts) AS (
SELECT * FROM VALUES
('OG-1', 'PG-1', 'First_Col', 'A', '2020-01-01'::date),
('OG-1', 'PG-1', 'First_Col', 'B', '2020-01-03'::date),
('OG-2', 'PG-1', 'Second_Col','A', '2020-01-01'::date),
('OG-3', 'PG-1', 'Third_Col', 'C', '2020-01-03'::date),
('OG-3', 'PG-1', 'Third_Col', 'B', '2020-01-05'::date),
('OG-4', 'PG-2', 'First_Col', 'D', '2020-02-02'::date),
('OG-4', 'PG-2', 'First_Col', 'E', '2020-02-04'::date),
('OG-5', 'PG-2', 'Second_Col','D', '2020-02-02'::date),
('OG-6', 'PG-2', 'Third_Col', 'F', '2020-02-04'::date),
('OG-6', 'PG-2', 'Third_Col', 'D', '2020-02-06'::date)
), get_normalized_ranges AS (
SELECT
*
,LEAD(add_dts,1,'9000-01-01'::date) OVER (PARTITION BY original_grain ORDER BY add_dts) AS next_add_dts
FROM input
), get_distinct_add_dates AS (
SELECT DISTINCT pivot_grain, add_dts AS driving_date
FROM input
), normalized_effective_at_each_point AS (
SELECT gnr.*,
gdad.driving_date
FROM get_normalized_ranges AS gnr
JOIN get_distinct_add_dates AS gdad
ON gnr.pivot_grain = gdad.pivot_grain
AND gdad.driving_date >= gnr.add_dts
AND gdad.driving_date < gnr.next_add_dts
)
SELECT pivot_grain
,driving_date
,LEAD(driving_date, 1, '9999-01-01'::date) OVER (PARTITION BY pivot_grain ORDER BY driving_date) AS to_dt
,MAX(IFF(pivot_column = 'First_Col', pivot_attribute, NULL)) AS first_col
,MAX(IFF(pivot_column = 'Second_Col', pivot_attribute, NULL)) AS second_col
,MAX(IFF(pivot_column = 'Third_Col', pivot_attribute, NULL)) AS third_col
FROM normalized_effective_at_each_point
GROUP BY pivot_grain, driving_date
ORDER BY pivot_grain, driving_date;
Which gives the results:
PIVOT_GRAIN | DRIVING_DATE | TO_DT | FIRST_COL | SECOND_COL | THIRD_COL |
---|---|---|---|---|---|
PG-1 | 2020-01-01 | 2020-01-03 | A | A | null |
PG-1 | 2020-01-03 | 2020-01-05 | B | A | C |
PG-1 | 2020-01-05 | 9999-01-01 | B | A | B |
PG-2 | 2020-02-02 | 2020-02-04 | D | D | null |
PG-2 | 2020-02-04 | 2020-02-06 | E | D | F |
PG-2 | 2020-02-06 | 9999-01-01 | E | D | D |
I can't help thinking I've over-mapped the way I handle my own data onto your PIVOT_GRAIN. Now that I understand the code, I started attacking the problem again from first principles, and the first three processing CTEs are how I would do it anyway, and the GROUP BY is how I would do the rest. The many JOINs really do seem gross; with Snowflake I prefer this pattern of exploding the data out and then merging (or grouping) it back, because it's all nice and parallelizable.