How can I parameterize a lag parameter for each independent variable in a PyMC (5.10.0) model?


I am new to PyMC and am using version 5.10.0. I am running a simple media mix model (MMM), and I need to parameterize a separate lag between each independent (media activity) variable and the dependent variable (sales volume). Parameterizing these lags produces the following error:

Cell In[15], line 75
     72 # Creating a tensor to store transformed values
     73 adstocked = tt.zeros_like(x)
---> 75 for i in range(max_lag, x.shape[0]):
     76     weights = tt.power(rate, tt.arange(max_lag + 1))
     77     adstocked = tt.set_subtensor(adstocked[i], tt.dot(x[i-max_lag:i+1][::-1], weights))

TypeError: 'TensorVariable' object cannot be interpreted as an integer

A reproducible example is below.

## Create a simple MMM dataset
import pandas as pd
from random import randint
import numpy as np
import pytensor.tensor as tt
import pytensor as pt
import pymc as pm
import pymc.sampling.jax as pmjax
import arviz as az


# Generate date range
dates = pd.date_range(start="2021-01-01", end="2022-01-01")

data = {
    "date": dates,
    "gcm_direct_Impressions": [randint(10000, 20000) for _ in dates],
    "display_direct_Impressions" :[randint(100000,150000) for _ in dates],
    "tv_grps": [randint(30, 50) for _ in dates],
    "tiktok_direct_Impressions": [randint(10000, 15000) for _ in dates],
    "sell_out_quantity": [randint(150, 250) for _ in dates]
}
df = pd.DataFrame(data)
m = max(df['sell_out_quantity'].values)

print(f"Max sales Volume {m}")

channel_columns = [col for col in df.columns if 'Impressions' in col or 'grps' in col]

transform_variables = channel_columns


delay_channels = channel_columns

media_channels = channel_columns

target = 'sell_out_quantity'

### Transform each channel variable

data_transformed = df.copy()

numerical_encoder_dict = {}


for feature in transform_variables:
    # Extracting the original values of the feature.
    original = df[feature].values

    # Calculating the maximum value of the feature.
    max_value = original.max()

    # Dividing each value in the feature by the maximum value.
    transformed = original / max_value

    # Storing the transformed data back into the 'data_transformed' DataFrame.
    data_transformed[feature] = transformed

    # Storing the maximum value used for scaling in the dictionary.
    # This will be used for reversing the transformation if needed.
    numerical_encoder_dict[feature] = max_value



def adstock_transform(x, rate,max_lag):
    """ Apply adstock transformation with PyTensor.
    :param x: PyTensor tensor, original data for the channel
    :param rate: PyTensor tensor, decay rate of the adstock transformation
    :param max_lag: int, maximum lag to consider for the adstock effect
    :return: PyTensor tensor, transformed data
    """
    # Creating a tensor to store transformed values
    adstocked = tt.zeros_like(x)
    
    for i in range(max_lag, x.shape[0]):
        weights = tt.power(rate, tt.arange(max_lag + 1))
        adstocked = tt.set_subtensor(adstocked[i], tt.dot(x[i-max_lag:i+1][::-1], weights))
    
    return adstocked

### Create a model
response_mean = []

with pm.Model() as model_2:
    # Looping through each channel in the list of delay channels.
    for channel_name in delay_channels:
        print(f"Delay Channels: Adding {channel_name}")

        # Extracting the transformed data for the current channel.
        x = data_transformed[channel_name].values

        # Defining Bayesian priors for the adstock, gamma, and alpha parameters for the current channel.
        adstock_param = pm.Beta(f"{channel_name}_adstock", 2, 2)
        saturation_gamma = pm.Beta(f"{channel_name}_gamma", 2, 2)
        saturation_alpha = pm.Gamma(f"{channel_name}_alpha", 3, 1)
        rate = pm.Beta(f'{channel_name}_rate', alpha=1, beta=1)
        lag = pm.DiscreteUniform(f"{channel_name}_lag",lower=0,upper=17)
        
        transformed_X1 = adstock_transform(x,rate,max_lag=lag)
        transformed_X2 = tt.zeros_like(x)
        for i in range(1,len(x)):
            transformed_X2 = tt.set_subtensor(transformed_X2[i],(transformed_X1[i]**saturation_alpha)/(transformed_X1[i]**saturation_alpha+saturation_gamma**saturation_alpha))
        channel_b = pm.HalfNormal(f"{channel_name}_media_coef", sigma = m)
        response_mean.append(transformed_X2 * channel_b)

    intercept = pm.Normal("intercept",mu = np.mean(data_transformed[target].values), sigma = 3)
    sigma = pm.HalfNormal("sigma", 4)
    likelihood = pm.Normal("outcome", mu = intercept + sum(response_mean), sigma = sigma,
                           observed = data_transformed[target].values)

with model_2:
    trace = pmjax.sample_numpyro_nuts(1000, tune=1000, target_accept=0.95)
    
    trace_summary = az.summary(trace)

I have tried the following:

  • Replacing max_lag inside the adstock_transform function with int(max_lag.eval()).
  • Replacing x = data_transformed[channel_name].values with x = pm.ConstantData("data_{channel_name", data_transformed[channel_name].values).

Neither of these worked. Should I change the sampler I am using? Is there a different implementation of the discrete uniform distribution that could help with this kind of problem? Any help is greatly appreciated!

bayesian sampling pymc uniform-distribution
1 Answer

I found a simple workaround.

When using pmjax.sample_numpyro_nuts() for probabilistic modeling, keep in mind that this sampler is designed for continuous distributions and does not directly support discrete ones. One way to work around this limitation is to replace the DiscreteUniform distribution with a Uniform distribution. However, this substitution produces a continuous value, which may not be suitable for a function like adstock_transform that expects a discrete input. To address this, you can convert the continuous value back to a discrete one with max_lag = int(max_lag.eval()). This line effectively casts the float to an integer, making it compatible with the adstock_transform function. Feedback on this answer is welcome.
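
Below is a minimal sketch of that workaround, assuming the data frame, delay_channels list, and adstock_transform function defined in the question; only the lag-related lines change. Note that int(lag.eval()) draws a single value from the prior at model-build time, so the lag acts as a fixed constant rather than a parameter that is inferred jointly with the others.

import pymc as pm
import pymc.sampling.jax as pmjax

with pm.Model() as model_2:
    for channel_name in delay_channels:
        x = data_transformed[channel_name].values

        rate = pm.Beta(f"{channel_name}_rate", alpha=1, beta=1)

        # Continuous stand-in for the discrete lag so that the
        # gradient-based NUTS sampler in sample_numpyro_nuts() can run.
        lag = pm.Uniform(f"{channel_name}_lag", lower=0, upper=17)

        # Draw one value from the prior and cast it to a plain Python int,
        # so it can be used as a loop bound inside adstock_transform.
        max_lag = int(lag.eval())

        transformed_X1 = adstock_transform(x, rate, max_lag=max_lag)

        # ... the remaining priors, saturation transform, channel coefficient,
        # likelihood, and the sample_numpyro_nuts() call stay as in the question.

Because the cast happens while the model is being built, each channel's lag is effectively a fixed hyperparameter rather than an inferred quantity; if the lag itself needs to be estimated, a sampler that supports discrete variables would be required instead (for example the default pm.sample(), which assigns a Metropolis step to discrete parameters).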
