K-Means result index differs in the second run

Problem description

I am running K-Means on some statistics. My matrix has the size [192x31634]. K-Means performs well and creates the 7 centroids I want, so my result is [192x7].

As a self-check, I store the index values obtained from K-Means in a dictionary.

    centroids,idx = runkMeans(X_train, initial_centroids, max_iters)
    resultDict.update({'centroid' : centroids})
    resultDict.update({'idx' : idx})

Then I test my K-Means with the same data that was used to find the centroids. Strangely, my results differ:

    dict = pickle.load(open("MyDictionary.p", "rb"))
    currentIdx = findClosestCentroids(X_train, dict['centroid'])
    print("idx Differs: ", np.count_nonzero(currentIdx != dict['idx']))

Output:

idx Differs: 189

Can someone explain this difference to me? I set the maximum number of iterations of the algorithm to 50, which seems to be more than enough. @Joe Halliwell pointed out that K-Means is non-deterministic. findClosestCentroids is called by runkMeans, so I do not see why the two idx results can differ. Thanks for any ideas.

Here is my code:

    def findClosestCentroids(X, centroids):
        K = centroids.shape[0]
        m = X.shape[0]
        dist = np.zeros((K,1))
        idx = np.zeros((m,1), dtype=int)
        # the number of rows of X is the number of data points
        for i in range(m):
            # every row of X is one data point
            x = X[i,:]
            # the number of rows of centroids is the number of centroids
            for j in range(K):
                # every row of centroids is one centroid
                c = centroids[j,:]
                # Euclidean distance between centroid c and data point x
                dist[j] = np.linalg.norm(c-x)
                # once the last centroid has been processed...
                if (j == K-1):
                    # ...idx is set to the index of the centroid with minimal distance
                    idx[i] = np.argmin(dist)
        return idx

    def runkMeans(X, initial_centroids, max_iters):
        #Initialize values
        m,n = X.shape
        K = initial_centroids.shape[0]
        centroids = initial_centroids
        previous_centroids = centroids
        for i in range(max_iters):
            print("K_Means iteration:",i)
            #For each example in X, assign it to the closest centroid
            idx = findClosestCentroids(X, centroids)
            #Given the memberships, compute new centroids
            centroids = computeCentroids(X, idx, K)
        return centroids,idx
Tags: python, k-means, unsupervised-learning
1 Answer

K-Means is a non-deterministic algorithm. This is usually controlled by setting the random seed. For example, scikit-learn's implementation provides the random_state parameter for this purpose:

    from sklearn.cluster import KMeans
    import numpy as np

    # Six 2-D points forming two obvious clusters
    X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
    # Fixing random_state makes the centroid initialization reproducible
    kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
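With the seed fixed, the fitted estimator exposes the assignments and centroids directly, so the self-check from the question can be repeated against it. A minimal sketch using only documented attributes of the fitted KMeans object, continuing the example above:

    # Cluster assignment of each training point and the fitted centroids
    print(kmeans.labels_)
    print(kmeans.cluster_centers_)
    # Re-assigning the same data should reproduce the stored labels,
    # which is the kind of check attempted in the question
    print(np.count_nonzero(kmeans.predict(X) != kmeans.labels_))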

See the documentation at https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html
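The question's runkMeans takes initial_centroids as an argument, so any run-to-run variation presumably comes from however those initial centroids are chosen. Below is a minimal sketch, assuming they are sampled at random from the rows of X_train; kMeansInitCentroids is a hypothetical helper, not part of the original code. Fixing the NumPy seed makes the selection, and hence the whole run, reproducible:

    import numpy as np

    def kMeansInitCentroids(X, K, seed=0):
        # Hypothetical helper: pick K distinct rows of X as the initial centroids.
        # A fixed seed makes the choice reproducible across runs.
        rng = np.random.default_rng(seed)
        chosen = rng.choice(X.shape[0], size=K, replace=False)
        return X[chosen, :]

    # initial_centroids = kMeansInitCentroids(X_train, K=7, seed=0)
    # centroids, idx = runkMeans(X_train, initial_centroids, max_iters=50)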
