I'd like to optimize a piece of code that finds the nearest neighbor for each of the 100,000 rows in a given dataset. The dataset has 50 variable columns that describe each row item, and most cells hold probability values between 0 and 1.
Question:
I'm still fairly new to Python, but I was wondering whether any more advanced users could recommend a better structure for the code below that would help speed up the computation. At the moment the program takes a very long time to finish. Thanks in advance!

```python
import math
import numpy as np
import pandas as pd
from scipy.spatial import distance
from sklearn.neighbors import KNeighborsRegressor

df_set = pd.read_excel('input.xlsx', skiprows=0)

distance_columns = ["var_1",
                    ......,
                    ......,
                    ......,
                    "var_50"]

def euclidean_distance(row):
    inner_value = 0
    for k in distance_columns:
        inner_value += (row[k] - selected_row[k]) ** 2
    return math.sqrt(inner_value)

knn_name_list = []

for i in range(len(df_set.index)):
    numeric = df_set[distance_columns]
    normalized = (numeric - numeric.mean()) / numeric.std()
    normalized.fillna(0, inplace=True)

    selected_normalized = normalized[df_set["Filename"] == df_set["Filename"][i]]

    euclidean_distances = normalized.apply(
        lambda row: distance.euclidean(row, selected_normalized), axis=1)

    distance_frame = pd.DataFrame(data={"dist": euclidean_distances,
                                        "idx": euclidean_distances.index})
    distance_frame.sort_values("dist", inplace=True)

    second_smallest = distance_frame.iloc[1]["idx"]
    most_similar_to_selected = df_set.loc[int(second_smallest)]["Filename"]
    knn_name_list.append(most_similar_to_selected)

print(knn_name_list)

df_set['top_neighbor'] = np.array(knn_name_list)
df_set.to_csv('output.csv', encoding='utf-8', sep=',', index=False)
```
I would suggest using NearestNeighbors. (Set n_jobs to -1 to use all processors.)
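A minimal sketch of that approach, using a small random stand-in for the real data (the `var_1`..`var_50` column names and the `Filename` column are assumed to match the question's setup). The key changes are normalizing once outside any loop and letting one `kneighbors` call do all 100k queries at once:

```python
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Stand-in for the real dataset: rows of 0-1 probability values.
cols = [f"var_{i}" for i in range(1, 51)]
df_set = pd.DataFrame(rng.random((100, 50)), columns=cols)
df_set["Filename"] = [f"file_{i}" for i in range(len(df_set))]

# Normalize once, outside any loop (the question re-computes this per row).
numeric = df_set[cols]
normalized = ((numeric - numeric.mean()) / numeric.std()).fillna(0)

# n_neighbors=2 because each row's nearest neighbor is itself at distance 0.
nn = NearestNeighbors(n_neighbors=2, n_jobs=-1).fit(normalized)
dist, idx = nn.kneighbors(normalized)

# Column 1 of idx holds the closest *other* row for each item.
df_set["top_neighbor"] = df_set["Filename"].to_numpy()[idx[:, 1]]
```

On the real data you would replace the random frame with `pd.read_excel('input.xlsx')` and write the result out with `to_csv` as before; the per-row Python loop and `DataFrame.apply` disappear entirely.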
To give you another idea on top of @Amine's approach, you could also incorporate a PCA transformation
(https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html).
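A sketch of that combination, assuming the features have already been normalized; the choice of 10 components is purely illustrative (you would pick it from the explained-variance ratio on the real data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X = rng.random((100, 50))  # stand-in for the normalized 50-column matrix

# Project the 50 columns down before the neighbor search; fewer
# dimensions means cheaper distance computations.
reduced = PCA(n_components=10).fit_transform(X)

nn = NearestNeighbors(n_neighbors=2, n_jobs=-1).fit(reduced)
dist, idx = nn.kneighbors(reduced)
nearest_other = idx[:, 1]  # nearest neighbor excluding the row itself
```

Note that distances in the reduced space are approximations of the full 50-dimensional distances, so the nearest neighbor found this way can occasionally differ from the exact answer.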