When I try to run this code to preprocess text, I get the error below. Someone has run into a similar problem, but that post didn't have enough detail.
I'm including everything for context, in the hope that it helps answerers help us better.
Here is the function:
    def preprocessing(text):
        #text=text.decode("utf8")
        # tokenize into words
        tokens = [word for sent in nltk.sent_tokenize(text)
                  for word in nltk.word_tokenize(sent)]
        # remove stopwords
        stop = stopwords.words('english')
        tokens = [token for token in tokens if token not in stop]
        # remove words less than three letters
        tokens = [word for word in tokens if len(word) >= 3]
        # lower capitalization
        tokens = [word.lower() for word in tokens]
        # lemmatization
        lmtzr = WordNetLemmatizer()
        tokens = [lmtzr.lemmatize(word for word in tokens)]
        preprocessed_text = ' '.join(tokens)
        return preprocessed_text
    # open the text data from its disk location
    sms = open('C:/Users/Ray/Documents/BSU/Machine_learning/Natural_language_Processing_Pyhton_And_NLTK_Chap6/smsspamcollection/SMSSpamCollection')
    sms_data = []
    sms_labels = []
    csv_reader = csv.reader(sms, delimiter='\t')
    for line in csv_reader:
        # adding the sms_id
        sms_labels.append(line[0])
        # adding the cleaned text by calling the preprocessing method
        sms_data.append(preprocessing(line[1]))
    sms.close()
The result:
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-38-b42d443adaa6> in <module>()
          8     sms_labels.append(line[0])
          9     #adding the cleaned text by calling the preprocessing method
    ---> 10     sms_data.append(preprocessing(line[1]))
         11 sms.close()

    <ipython-input-37-69ef4cd83745> in preprocessing(text)
         12     #lemmatization
         13     lmtzr=WordNetLemmatizer()
    ---> 14     tokens=[lmtzr.lemmatize(word for word in tokens)]
         15     preprocessed_text=' '.join(tokens)
         16     return preprocessed_text

    ~\Anaconda3\lib\site-packages\nltk\stem\wordnet.py in lemmatize(self, word, pos)
         38
         39     def lemmatize(self, word, pos=NOUN):
    ---> 40         lemmas = wordnet._morphy(word, pos)
         41         return min(lemmas, key=len) if lemmas else word
         42

    ~\Anaconda3\lib\site-packages\nltk\corpus\reader\wordnet.py in _morphy(self, form, pos, check_exceptions)
       1798
       1799         # 1. Apply rules once to the input to get y1, y2, y3, etc.
    -> 1800         forms = apply_rules([form])
       1801
       1802         # 2. Return all that are in the database (and check the original too)

    ~\Anaconda3\lib\site-packages\nltk\corpus\reader\wordnet.py in apply_rules(forms)
       1777         def apply_rules(forms):
       1778             return [form[:-len(old)] + new
    -> 1779                     for form in forms
       1780                     for old, new in substitutions
       1781                     if form.endswith(old)]

    ~\Anaconda3\lib\site-packages\nltk\corpus\reader\wordnet.py in <listcomp>(.0)
       1779                     for form in forms
       1780                     for old, new in substitutions
    -> 1781                     if form.endswith(old)]
       1782
       1783         def filter_forms(forms):

    AttributeError: 'generator' object has no attribute 'endswith'
I believe the error comes from the source code of nltk.corpus.reader.wordnet.
The full source can be seen on the NLTK documentation pages; it is too long to post here, but here is the original link:
Thanks for your help.
The error message and the traceback point you to the source of your problem:
    <ipython-input-37-69ef4cd83745> in preprocessing(text)
         12     #lemmatization
         13     lmtzr=WordNetLemmatizer()
    ---> 14     tokens=[lmtzr.lemmatize(word for word in tokens)]
         15     preprocessed_text=' '.join(tokens)
         16     return preprocessed_text

    ~\Anaconda3\lib\site-packages\nltk\stem\wordnet.py in lemmatize(self, word, pos)
         38
         39     def lemmatize(self, word, pos=NOUN):
Clearly, from the function's signature (word, not words) and from the error ("no attribute 'endswith'" - endswith() is a str method), lemmatize() expects a single word, but here:

    tokens=[lmtzr.lemmatize(word for word in tokens)]

you are passing it a generator expression.
What you want instead is:

    tokens = [lmtzr.lemmatize(word) for word in tokens]
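The difference is easy to see with plain strings, no NLTK required. A generator expression is a single object that has none of str's methods, whereas the fixed list comprehension calls the function once per string. Here a trivial `rstrip("s")` stands in for `lmtzr.lemmatize`, purely for illustration:

```python
tokens = ["cars", "apples"]

# Wrong pattern: the generator itself is one object with no str methods,
# so any code that calls .endswith() on it fails, as in the traceback.
gen = (word for word in tokens)
print(hasattr(gen, "endswith"))   # False

# Right pattern: apply the function to each element, one string at a time.
# rstrip("s") is only a stand-in for lmtzr.lemmatize here.
lemmas = [word.rstrip("s") for word in tokens]
print(lemmas)                     # ['car', 'apple']
```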
NB: you mentioned:

    I believe the error comes from the source code of nltk.corpus.reader.wordnet

The error is indeed raised within that package, but it "comes from" (in the sense of "is caused by") your own code passing the wrong kind of argument ;)
Hope this helps you debug this kind of issue by yourself next time.
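As a side note, the file-reading loop in your question can be made a bit safer with a context manager, which closes the file even if preprocessing raises. A minimal sketch, using an in-memory sample instead of your real path and a placeholder `lower()` instead of the full preprocessing function:

```python
import csv
import io

# In-memory stand-in for the SMSSpamCollection file (tab-separated: label \t text).
sample = "ham\tGo until jurong point\nspam\tFree entry in a wkly comp\n"

sms_labels = []
sms_data = []
# For the real file: with open(path) as sms:
with io.StringIO(sample) as sms:
    for line in csv.reader(sms, delimiter='\t'):
        sms_labels.append(line[0])
        # preprocessing(line[1]) in your real code; lower() is a placeholder
        sms_data.append(line[1].lower())

print(sms_labels)   # ['ham', 'spam']
```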