This question is about building a k-nearest-neighbors graph (KNNG) from a dataset where the number of centroids is unknown (this is different from K-means clustering).
Suppose you have a dataset of observations stored in a data matrix X[n_samples, n_features], where each row is an observation (feature vector) and each column is a feature. Now suppose you want to compute the (weighted) k-neighbors graph of the points in X using sklearn.neighbors.kneighbors_graph.
What is a principled way to choose the number of neighbors to use for each sample? And which algorithm scales well when you have a large number of observations?
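For concreteness, once a value of k is fixed, building the weighted graph itself is a single call to kneighbors_graph. A minimal sketch on made-up toy data (the points and n_neighbors value are purely illustrative):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

# toy data matrix: 5 samples, 2 features (made-up values)
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [5.0, 5.0], [5.0, 6.0]])

# mode='distance' makes edge weights the euclidean distances,
# i.e. a weighted k-neighbors graph
A = kneighbors_graph(X, n_neighbors=2, mode='distance')

# A is a sparse (n_samples, n_samples) adjacency matrix with
# n_neighbors stored entries per row
print(A.shape)
print(A.nnz)
```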
I have seen the brute-force approach below, but it does not scale well as the sample dataset grows, and you still have to pick a reasonable starting upper bound for n_neighbors_max. Does this algorithm have a name?
import numpy
import sklearn.metrics.pairwise

def autoselect_K(X, n_neighbors_max, threshold):
    # get the pairwise euclidean distance between every observation
    D = sklearn.metrics.pairwise.euclidean_distances(X, X)
    chosen_k = n_neighbors_max
    for k in range(2, n_neighbors_max):
        k_avg = []
        # loop over each row in the distance matrix
        for row in D:
            # sort the row from smallest distance to largest distance
            sorted_row = numpy.sort(row)
            # mean of the k smallest neighbor distances, skipping
            # sorted_row[0], which is the zero distance of a point to itself
            k_avg.append(numpy.mean(sorted_row[1:k + 1]))
        # find the median of the per-sample averages
        kmedian_dist = numpy.median(k_avg)
        if kmedian_dist >= threshold:
            chosen_k = k
            break
    # return the number of nearest neighbors to use
    return chosen_k
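The dense pairwise-distance matrix is what stops this from scaling: it costs O(n²) memory and time. One possible sketch of the same median-of-k-averages heuristic, assuming a tree-based sklearn.neighbors.NearestNeighbors index is acceptable, queries only the n_neighbors_max smallest distances per sample instead of materializing the full matrix (function name and the threshold value below are my own, for illustration):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def autoselect_K_tree(X, n_neighbors_max, threshold):
    # tree-based index avoids the full n x n distance matrix
    nn = NearestNeighbors(n_neighbors=n_neighbors_max).fit(X)
    # dist[i] holds the sorted distances from sample i to its
    # n_neighbors_max nearest neighbors; column 0 is the zero
    # self-distance because we query the training points themselves
    dist, _ = nn.kneighbors(X)
    for k in range(2, n_neighbors_max):
        # median over samples of the mean distance to the k nearest neighbors
        kmedian_dist = np.median(np.mean(dist[:, 1:k + 1], axis=1))
        if kmedian_dist >= threshold:
            return k
    return n_neighbors_max

# illustrative usage on random data with a made-up threshold
X = np.random.default_rng(0).normal(size=(100, 3))
k = autoselect_K_tree(X, n_neighbors_max=10, threshold=0.5)
```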
Maybe what you are looking for is the KNeighborsClassifier. See https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html
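For completeness, a minimal sketch of that classifier in use (the toy training data below is made up):

```python
from sklearn.neighbors import KNeighborsClassifier

# toy training set: two well-separated classes (made-up data)
X_train = [[0.0], [0.2], [0.4], [5.0], [5.2], [5.4]]
y_train = [0, 0, 0, 1, 1, 1]

# classify each query point by majority vote among its 3 nearest neighbors
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)
print(clf.predict([[0.1], [5.1]]))  # -> [0 1]
```

Note that a classifier needs labels y, whereas the question above is about the unsupervised k-neighbors graph, so this only applies if the observations are labeled.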