Obtaining the inertia of nltk k-means clustering with cosine similarity

Question · Votes: 0 · Answers: 1

I'm using nltk for k-means clustering because I want to change the distance metric. Does nltk's k-means expose an inertia value like sklearn's does? I can't seem to find it in the documentation or online...
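For reference, swapping in a different distance metric with nltk's KMeansClusterer (the setup described above) looks roughly like this. The toy vectors below are hypothetical, a minimal sketch rather than the poster's actual data:

```python
import numpy as np
from nltk.cluster import KMeansClusterer, cosine_distance

# Hypothetical toy vectors standing in for the real feature matrix.
vectors = [np.array(v) for v in [[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [8.0, 8.0]]]

# k-means with cosine distance instead of the default Euclidean metric.
clusterer = KMeansClusterer(2, distance=cosine_distance, repeats=10,
                            avoid_empty_clusters=True)
assigned_clusters = clusterer.cluster(vectors, assign_clusters=True)

print(assigned_clusters)   # one cluster label per input vector
print(clusterer.means())   # the two learned centroids
```

Note that `cluster(..., assign_clusters=True)` returns the label list directly; there is no `inertia_`-style attribute, which is why the answer below computes it by hand.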

The code below is how inertia is typically obtained with sklearn's KMeans.

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

inertia = []
for n_clusters in range(2, 26):
    clusterer = KMeans(n_clusters=n_clusters)
    preds = clusterer.fit_predict(features)  # features: the sample matrix being clustered
    inertia.append(clusterer.inertia_)

plt.plot(range(2, 26), inertia, 'bx-')
plt.xlabel('k')
plt.ylabel('Sum of squared distances')
plt.title('Elbow Method For Optimal k')
plt.show()
python nltk k-means
1 Answer

0 votes

You can write your own function to compute the inertia of nltk's KMeansClusterer.

Based on the question you posted, How do I obtain individual centroids of K mean cluster using nltk (python), and using the same dummy data, after forming 2 clusters:

Per the reference documentation https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html, inertia is the sum of squared distances of samples to their closest cluster center.
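As a quick sanity check of that definition, with hypothetical toy numbers:

```python
import numpy as np

# Two samples, each paired with its (hypothetical) nearest cluster center.
points = np.array([[1.0, 0.0], [0.0, 2.0]])
nearest_centers = np.array([[0.0, 0.0], [0.0, 0.0]])

# Inertia: sum over samples of the squared Euclidean distance to that center.
inertia = np.sum((points - nearest_centers) ** 2)
print(inertia)  # 1.0 + 4.0 = 5.0
```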

import numpy as np

feature_matrix = df[['feature1', 'feature2', 'feature3']].to_numpy()
centroid = df['centroid'].to_numpy()  # the centroid assigned to each row

def nltk_inertia(feature_matrix, centroid):
    # Inertia as given in the scikit-learn docs: the sum of squared
    # distances from each sample to its assigned cluster center.
    sum_ = []
    for i in range(feature_matrix.shape[0]):
        sum_.append(np.sum((feature_matrix[i] - centroid[i]) ** 2))
    return sum(sum_)

nltk_inertia(feature_matrix, centroid)
# output: 27.495250000000002

# Now run sklearn's KMeans on feature1, feature2, and feature3 with the same number of clusters (2):

from sklearn.cluster import KMeans

scikit_kmeans = KMeans(n_clusters=2)
scikit_kmeans.fit(vectors)  # vectors = [np.array(f) for f in df.values], containing feature1, feature2, feature3
scikit_kmeans.inertia_
# output: 27.495250000000006
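The same agreement can be checked on self-contained toy data (hypothetical values, not the df from the question): recomputing the sum of squared distances from sklearn's own labels and centers reproduces inertia_ up to floating-point error.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical toy feature matrix.
X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0],
              [8.0, 8.0], [1.0, 0.6], [9.0, 11.0]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Manual inertia: squared distance from each sample to its assigned centroid.
manual = np.sum((X - km.cluster_centers_[km.labels_]) ** 2)
print(np.isclose(manual, km.inertia_))  # True
```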