Clustering a large file into 3 groups with Levenshtein distance

Question · 1 vote · 1 answer

Hi, I have a small file and a large file. The code below doesn't even work on the large file, only on the small one, so how can I read the large file and run the clustering on it? When I read it and try to cluster inside a loop, it doesn't work, because each iteration only sees a single line. Here is the problem with the small file: it is a file of lines that I need to cluster into 3 groups. I already tried affinity propagation, but it doesn't take a group-size parameter; it gave me 4 groups, and the 4th group has only one word, which is very close to another group:

0
 - *Bras5emax Estates, L.T.D.
:* Bras5emax Estates, L.T.D.

1
 - *BOZEMAN Enterprises
:* BBAZEMAX ESTATES, LTD
, BOZEMAN Ent.
, BOZEMAN Enterprises
, BOZERMAN ENTERPRISES
, BRAZEMAX ESTATYS, LTD
, Bozeman Enterprises

2
 - *PC Adelman
:* John Smith
, Michele LTD
, Nadelman, Jr
, PC Adelman

3
 - *Gramkai, Inc.
:* Gramkai Books
, Gramkai, Inc.
, Gramkat Estates, Inc., Gramkat, Inc.

Then I tried K-MEANS, but the result was:

0
 - *Gramkai Books
, Gramkai, Inc.
, Gramkat Estates, Inc., Gramkat, Inc.
:*
1
 - *BBAZEMAX ESTATES, LTD
, BOZEMAN Enterprises
, BOZERMAN ENTERPRISES
, BRAZEMAX ESTATYS, LTD
, Bozeman Enterprises
, Bras5emax Estates, L.T.D.
:*
2
 - *BOZEMAN Ent.
, John Smith
, Michele LTD
, Nadelman, Jr
, PC Adelman
:*

You can see that BOZEMAN Ent. is in group 2 instead of group 1.

My questions are: is there a way to get a better clustering? And is there a cluster_center in K-MEANS?

Code:

import numpy as np
import sklearn.cluster
import distance

f = open("names.txt", "r")
words = f.readlines()
words = np.asarray(words) #So that indexing with a list will work
lev_similarity = -1*np.array([[distance.levenshtein(w1,w2) for w1 in words] for w2 in words])
affprop = sklearn.cluster.KMeans(n_clusters=3)
affprop.fit(lev_similarity)
for cluster_id in np.unique(affprop.labels_):
    print(cluster_id)
    cluster = np.unique(words[np.nonzero(affprop.labels_==cluster_id)])
    cluster_str = ", ".join(cluster)
    print(" - *%s:*" % ( cluster_str))
python file machine-learning cluster-computing k-means

1 Answer

0 votes

The clustering of the given text names (businesses) can be improved in several ways.

  1. Introduce some text cleaning and domain knowledge, e.g. remove dots, strip common business stop words, and lowercase the characters:
words = [re.sub(r"(,|\.|ltd|l\.t\.d|inc|estates|enterprises|ent|estatys)","", w.lower()).strip() for w in words]
  2. Use the "normalized" version of distance.levenshtein so that distances can be compared meaningfully, e.g.:
distance.nlevenshtein("abc", "acd", method=1)  # shortest alignment
distance.nlevenshtein("abc", "acd", method=2)  # longest alignment
  3. Try other distance metrics: sorensen and jaccard are already normalized.
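To illustrate what the normalization buys you, here is a minimal pure-Python sketch of a normalized edit distance (dividing by the longer string's length, which corresponds to the shortest-alignment normalization of method=1 above; the function name `nlev` is made up for this example):

```python
def nlev(a, b):
    """Levenshtein distance normalized to [0, 1] by the longer length.

    Illustrative sketch only -- roughly what distance.nlevenshtein
    with method=1 computes (shortest-alignment normalization).
    """
    if not a and not b:
        return 0.0
    # Standard dynamic-programming edit distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1] / max(len(a), len(b))

print(nlev("abc", "acd"))  # 2 edits / 3 chars ≈ 0.667
```

Because every value lands in [0, 1], a raw distance of 2 means something very different for 3-character strings than for 30-character company names, and normalization makes them comparable.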

Full code example below:

words = \
["Gramkai Books",
"Gramkai, Inc.",
"Gramkat Estates, Inc.",
"Gramkat, Inc.",
"BBAZEMAX ESTATES, LTD",
"BOZEMAN Enterprises",
"BOZERMAN ENTERPRISES",
"BRAZEMAX ESTATYS, LTD",
"Bozeman Enterprises",
"Bras5emax Estates, L.T.D.",
"BOZEMAN Ent.",
"John Smith",
"Michele LTD",
"Nadelman, Jr",
"PC Adelman"]

import re
import numpy as np   # needed for np.asarray / np.array below
import distance      # needed for distance.nlevenshtein below
import sklearn
from sklearn import cluster
words = [re.sub(r"(,|\.|ltd|l\.t\.d|inc|estates|enterprises|ent|estatys)","", w.lower()).strip() for w in words]
words = np.asarray(words) #So that indexing with a list will work
lev_similarity = -1*np.array([[distance.nlevenshtein(w1,w2,method = 1) for w1 in words] for w2 in words])
affprop = sklearn.cluster.KMeans(n_clusters=3)
affprop.fit(lev_similarity)
for cluster_id in np.unique(affprop.labels_):
    print(cluster_id)
    cluster = np.unique(words[np.nonzero(affprop.labels_==cluster_id)])
    cluster_str = ", ".join(cluster)
    print(" - *%s:*" % ( cluster_str))

Result:

0
 - *john smith, michele, nadelman jr, pc adelman:*
1
 - *bbazemax, bozeman, bozerman, bras5emax, brazemax:*
2
 - *gramkai, gramkai books, gramkat:*
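On the cluster_center part of the question: yes, sklearn's KMeans exposes the fitted centroids as the cluster_centers_ attribute after fit(). A minimal sketch on made-up 1-D data (with a precomputed similarity matrix like the one above, each center lives in that matrix's row space, so the word closest to a center can be found by taking the row with the smallest distance to it):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy 1-D data: two obvious groups, around 0 and around 5.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(km.cluster_centers_)  # one centroid per cluster, shape (2, 1); order may vary
print(km.labels_)           # cluster id for each input row
```

Here the two centroids come out near 0.05 and 5.05, the means of each group.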

Finally, you may want to join the changed names back to the original names.
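A minimal sketch of that join, assuming the cleaned list keeps the original order so labels line up index-by-index (the clean function mirrors the re.sub above; the names and labels are illustrative):

```python
import re

originals = ["Gramkai Books", "BOZEMAN Ent.", "PC Adelman"]

def clean(w):
    # Same cleaning rule as in the answer above.
    return re.sub(r"(,|\.|ltd|l\.t\.d|inc|estates|enterprises|ent|estatys)",
                  "", w.lower()).strip()

cleaned = [clean(w) for w in originals]
# Pretend cluster labels from KMeans, one per word (illustrative values).
labels = [0, 1, 1]

# Group the ORIGINAL spellings by label, since positions line up.
groups = {}
for orig, label in zip(originals, labels):
    groups.setdefault(label, []).append(orig)

print(groups)  # {0: ['Gramkai Books'], 1: ['BOZEMAN Ent.', 'PC Adelman']}
```

In the real script, labels would be affprop.labels_ and originals would be the lines read from names.txt before cleaning.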
