In my project, I combine three distinct input sources to generate a single score. Imagine the formula:

Integrated score = weight_1 * Score_1 + weight_2 * Score_2 + weight_3 * Score_3
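As a quick illustration of the formula (a Python sketch with made-up numbers, not values from my data):

```python
# Integrated score = weight_1*Score_1 + weight_2*Score_2 + weight_3*Score_3
weights = (0.5, 0.3, 0.2)      # must sum to 1.0; illustrative only
scores = (10.0, 20.0, 30.0)    # Score_1..Score_3, illustrative only
integrated = sum(w * s for w, s in zip(weights, scores))
print(integrated)  # 17.0
```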
To do this, I used the following code:
DATA w_matrix_t;
    /* Create a row count to identify the model weight combination */
    RETAIN model_combination;
    model_combination = 0;
    DO n_1 = 0 TO 100 BY 1;
        DO n_2 = 0 TO 100 BY 1;
            IF (100 - n_1 - n_2) GE 0 AND (100 - n_1 - n_2) LE 100 THEN DO;
                n_3 = 100 - n_1 - n_2;
                model_combination + 1;
                OUTPUT;
            END;
        END;
    END;
RUN;

DATA w_matrix;
    SET w_matrix_t;
    w_1 = n_1/100;
    w_2 = n_2/100;
    w_3 = n_3/100;
    /* Drop the old variables */
    DROP n_1 n_2 n_3;
RUN;
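As a sanity check on the weight grid above (sketched in Python rather than SAS), the number of weight combinations is the number of non-negative integer triples summing to 100, which by stars-and-bars is C(102, 2) = 5,151:

```python
from math import comb

# Enumerate (n_1, n_2) pairs with n_1 + n_2 <= 100; n_3 is then determined.
count = sum(1 for n1 in range(101) for n2 in range(101) if n1 + n2 <= 100)
print(count)         # 5151
print(comb(102, 2))  # 5151, the stars-and-bars count
```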
PROC SQL;
CREATE TABLE weights_added AS
SELECT
w.model_combination
, w.w_1
, w.w_2
, w.w_3
, fit.name
, fit.logsalary
, (
w.w_1*fit.crhits +
w.w_2*fit.natbat +
w.w_3*fit.nbb
) AS y_hat_int
FROM
work.w_matrix AS w
CROSS JOIN
sashelp.baseball AS fit
ORDER BY
model_combination;
QUIT;
My question is: is there a more efficient way to do this join? The aim is to create one large table containing the entire sashelp.baseball data set replicated for every weight combination.

In my live data, I have three input sources of 46,000 observations each, and the cross join takes an hour. I also have three input sources of 465,000 observations each, which I imagine will take far longer.

The reason I do it this way is that I calculate my Somers' D using PROC FREQ with BY-group processing (BY model_combination).
5,000 copies of a 500,000-row table would be quite a large table, at 2.5B rows.

Here is an example of DATA step stacking: one copy of the have data set is output for each row of the weights data set. The example reads weights with SET inside an explicit DO WHILE loop (driving the outer iteration), and for each weight row an inner explicit DO loop reads have with SET ... POINT= and OUTPUTs each row. The inner loop replicates the data while computing the weighted sum.
data have;
    set sashelp.baseball (obs=200); * keep it small for demonstration;
run;

data weights (keep=comboId w1 w2 w3);
    do i = 0 to 100;
        do j = 0 to 100;
            if (i + j) <= 100 then do;
                comboId + 1;
                w1 = i / 100;
                w2 = j / 100;
                w3 = (100 - i - j) / 100;
                output;
            end;
        end;
    end;
run;

data want (keep=comboId w1-w3 name logsalary y_hat_int);
    do while (not endOfWeights);
        set weights end=endOfWeights;
        do row = 1 to RowsInHave;
            set have (keep=name logsalary crhits natbat nbb) nobs=RowsInHave point=row;
            y_hat_int = w1 * crhits + w2 * natbat + w3 * nbb;
            output;
        end;
    end;
    stop;
run;
proc freq data=want noprint;
by comboId;
table y_hat_int / out=freqout ;
format y_hat_int 4.;
run;
proc contents data=want;
run;
Off the cuff: a table holding 5,151 copies of a 200-row extract from baseball is nominally 72.7MB, so expect 5,151 copies of a 465K-row table to have ~2.4B rows and take ~170GB on disk. On a disk spinning at 7200 rpm, with best-case performance, that is at least 20 minutes of writing, probably more.
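That back-of-the-envelope arithmetic can be checked quickly (a Python sketch; the 72.7MB figure is the observed size of the 200-row demo table, scaled linearly by row count):

```python
combos = 5151               # weight combinations
demo_rows, demo_mb = 200, 72.7   # observed demo table
live_rows = 465_000         # rows per live input source

total_rows = combos * live_rows
size_gb = demo_mb * (live_rows / demo_rows) / 1000  # linear scaling by rows

print(total_rows)       # 2395215000, roughly 2.4B rows
print(round(size_gb))   # 169, i.e. ~170GB
```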