ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 5 is different from 3)

Problem description (0 votes, 1 answer)
import numpy as np
from numpy.linalg import inv
from scipy.linalg import pinv

# Define necessary functions
def create_laplacian_from_adjacency(adj_matrix):
    degree_matrix = np.diag(adj_matrix.sum(axis=1))

    laplacian_matrix = degree_matrix - adj_matrix
    
    return laplacian_matrix

def dirichlet_energy(L, X):
    """Dirichlet energy for smoothness quantification."""
    return np.trace(X.T @ L @ X)

def update_C(X, X_tilde, L, C, gamma, alpha, lam, J):
    """Updating C using gradient descent with a majorized function approximation."""
    p, k = C.shape
    C_old = np.copy(C)
    
    gradient_f = (-2 * gamma * L @ C_old @ inv(C_old.T @ L @ C_old + J) +
                  alpha * (C_old @ X_tilde - X) @ X_tilde.T +
                  2 * L @ C_old @ X_tilde @ X_tilde.T + lam * C_old @ np.ones((k, k)))
    
    # Majorized function optimization step (simplified approach)
    t = 0.01  # Learning rate, needs tuning based on actual application
    C_new = pinv(C_old - t * gradient_f)
    C_new = np.maximum(C_new, 0)  # Enforce non-negativity
    return C_new

def update_X(X, L, C, alpha):
    """Update X(tilda) based on the updated C."""
    inv_matrix = inv((2/alpha) * (C.T @ L @ C)) + (C.T @ C)
    X_tilde_new = inv_matrix @ C.T @ X
    return X_tilde_new

def fgc_algorithm(X, L, alpha, gamma, lam, iterations=5):
    """Perform the Featured Graph Coarsening algorithm."""
    p, n = X.shape
    k = 3  # Assume a reduced dimension of 3 for coarsening
    C = np.random.rand(p, k)*0.1
    J = np.full((k, k), 1/k)
    X_tilde = pinv(C)@X

    for i in range(iterations):
        C = update_C(X, X_tilde, L, C, gamma, alpha, lam, J)
        X_tilde = update_X(X, L, C, alpha)
        current_energy = dirichlet_energy(L, X_tilde)
        print(f"Iteration {i}: Dirichlet Energy = {current_energy}")

    L_c = C.T @ L @ C
    return C, L_c, X_tilde

# Example:
X = np.random.rand(5, 7)  # Random features for 5 nodes (7 features per node)
adj_matrix = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 1, 1],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 1, 0, 1, 0]
])
L = create_laplacian_from_adjacency(adj_matrix)  # Create a sample Laplacian matrix
alpha, gamma, lam = 0.1, 1, 0.5  # Regularization parameters

C, L_c, X_tilde = fgc_algorithm(X, L, alpha, gamma, lam)
print("Updated C matrix:\n", C)
print("The L_C matrix:\n", L_c)
print("Updated feature matrix X(tilda):\n", X_tilde)

I am trying to implement the Featured Graph Coarsening algorithm, but every time the code reaches line 34, inv_matrix = inv((2/alpha) * (C.T @ L @ C)) + (C.T @ C), the error above is raised.

I have tried checking everything, and as far as I can tell the matrix dimensions are correct, so I do not really understand why this problem occurs.
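For example, a standalone check with placeholder matrices of the shapes I expect at that line (L is (5, 5), C is (5, 3), X is (5, 7)) runs without any error:

import numpy as np
from numpy.linalg import inv

# Placeholder matrices with the shapes I expect inside update_X
L_chk = np.eye(5)              # stands in for the (5, 5) Laplacian
C_chk = np.random.rand(5, 3)   # coarsening matrix, (5, 3)
X_chk = np.random.rand(5, 7)   # feature matrix, (5, 7)
alpha = 0.1

inv_matrix = inv((2/alpha) * (C_chk.T @ L_chk @ C_chk)) + (C_chk.T @ C_chk)
print((inv_matrix @ C_chk.T @ X_chk).shape)  # (3, 7), no ValueError here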

If you happen to see what is going on, please help.

numpy machine-learning graph matrix-multiplication
1 Answer

I am not very familiar with this algorithm, but the problem is in update_C(...): it changes the dimensions of C from (5, 3) to (3, 5) on this line:

C = update_C(X, X_tilde, L, C, gamma, alpha, lam, J)

More precisely, when C_new = pinv(C_old - t * gradient_f) is executed, the pseudo-inverse by definition swaps the dimensions.
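You can see the swap in isolation, with a throwaway array of the same shape as your C:

import numpy as np
from scipy.linalg import pinv

A = np.random.rand(5, 3)  # same shape as your C
print(A.shape)            # (5, 3)
print(pinv(A).shape)      # (3, 5): the pseudo-inverse of an (m, n) matrix is (n, m)

So after the first call to update_C, C has shape (3, 5), and update_X then evaluates C.T @ L @ C as a (5, 3) @ (5, 5) @ (3, 5) product; the (5, 3) @ (5, 5) part is exactly the size-5-versus-3 mismatch in your traceback.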

My guess is that you might be missing a transpose in update_C(...). I have not worked through it in detail, but assuming you are trying to update C with Equation 13 of the paper you are following, it looks like the transpose of C should be used in both the inverse and the gradient computations.

[Equation 13 from the paper]
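I have not verified this against the paper, so take it as a guess: the smallest shape-preserving change, without touching the gradient, is to take the pseudo-inverse of the transposed update so that C keeps its (5, 3) shape. Everything else below is copied from your code:

import numpy as np
from numpy.linalg import inv
from scipy.linalg import pinv

def update_C(X, X_tilde, L, C, gamma, alpha, lam, J):
    """Same update as in the question, but the pseudo-inverse is taken of the
    transposed step so the result is (p, k) again; check this against
    Equation 13 before relying on it."""
    p, k = C.shape
    C_old = np.copy(C)

    gradient_f = (-2 * gamma * L @ C_old @ inv(C_old.T @ L @ C_old + J) +
                  alpha * (C_old @ X_tilde - X) @ X_tilde.T +
                  2 * L @ C_old @ X_tilde @ X_tilde.T + lam * C_old @ np.ones((k, k)))

    t = 0.01  # learning rate
    C_new = pinv((C_old - t * gradient_f).T)  # pinv of the (k, p) transpose is (p, k)
    C_new = np.maximum(C_new, 0)              # enforce non-negativity
    return C_new

This only keeps the dimensions consistent; whether the update itself matches Equation 13 is something you will still want to verify.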
