I'm currently building a program that computes an approximate-duplicate score across a set of text documents (5000+ documents). I'm using Simhash to generate a unique fingerprint for each document (thanks to this GitHub repo).
My data is:
data = {
    1: u'Im testing simhash algorithm.',
    2: u'test of simhash algorithm',
    3: u'This is simhash test.',
}
which gives me three hashes like these:
00100110101110100011111000100010010101011001000001110000111001011100110101001101111010100010001011001011000110000100110101100110
00001001110010000000011000001000110010001010000101010000001100000100100011100100110010100000010000000110001001010110000010000100
10001110101100000100101010000010010001011010001000000000101000101100001100100000110011000000011001000000000110000000100110000000
Now, how do I compare these three hashes? I know I have to split them into chunks, but I don't know the exact method.
What I want to do is output all duplicate documents (> 70% similarity) together with their IDs and the IDs of the documents they duplicate.
Can anyone help?
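For context, comparing two simhash fingerprints usually means counting the bits where they differ, i.e. their Hamming distance: the fewer differing bits, the more similar the documents. A minimal sketch on bit strings like the ones above (`hamming_distance` is an illustrative helper, not part of the simhash repo):

```python
def hamming_distance(a, b):
    # number of positions where two equal-length bit strings differ
    return sum(x != y for x, y in zip(a, b))

h1 = "00100110101110100011111000100010010101011001000001110000111001011100110101001101111010100010001011001011000110000100110101100110"
h2 = "00001001110010000000011000001000110010001010000101010000001100000100100011100100110010100000010000000110001001010110000010000100"

d = hamming_distance(h1, h2)
print(d, "differing bits out of", len(h1))
```

A pair of 128-bit fingerprints with a small Hamming distance (relative to 128) corresponds to near-duplicate documents.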
Before getting to the answer, note that I have already replied to the GitHub issue you opened here.
For reference, here is some sample code you can use to print the final near-duplicate documents after hashing.
# assuming you have a dictionary with the document id as the key
# and the document text as the value:
# documents = { doc_id: doc } you can do:
from simhash import simhash

def split_hash(hash_str, num):
    # split a hash bit string into chunks of `num` characters
    return [hash_str[start:start + num] for start in range(0, len(hash_str), num)]

hashes = {}
for doc_id, doc in documents.items():
    doc_hash = str(simhash(doc))  # assumed to yield a bit string like the ones above
    # you can either use the whole hash for higher precision
    # or split it into chunks for higher recall
    hash_chunks = split_hash(doc_hash, 4)
    for chunk in hash_chunks:
        if chunk not in hashes:
            hashes[chunk] = []
        hashes[chunk].append(doc_id)

# now print the documents that share at least one chunk:
for chunk, doc_list in hashes.items():
    if len(doc_list) > 1:
        print("Duplicate documents:", doc_list)
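Note that the chunk buckets only give you candidate pairs; to enforce the > 70% requirement from the question you still need to score each candidate pair against the full hashes. A possible follow-up sketch (the `duplicate_pairs` helper and its bucketing by chunk position are my own illustration, not part of the simhash repo):

```python
from itertools import combinations

def duplicate_pairs(doc_hashes, chunk_size=4, threshold=0.70):
    # doc_hashes: { doc_id: full hash as a bit string }
    # bucket documents by (offset, chunk) so only chunks at the
    # same position can make two documents candidates
    buckets = {}
    for doc_id, h in doc_hashes.items():
        for start in range(0, len(h), chunk_size):
            key = (start, h[start:start + chunk_size])
            buckets.setdefault(key, set()).add(doc_id)
    pairs = set()
    for candidates in buckets.values():
        for a, b in combinations(sorted(candidates), 2):
            # fraction of matching bits between the two full hashes
            sim = sum(x == y for x, y in zip(doc_hashes[a], doc_hashes[b])) / len(doc_hashes[a])
            if sim > threshold:
                pairs.add((a, b))
    return sorted(pairs)
```

Each returned pair is a pair of document IDs whose full fingerprints agree on more than 70% of their bits, which is the output format the question asks for.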
Let me know if anything is unclear.