Splitting sentences in a DataFrame using RegexpTokenizer [duplicate]

Question — Votes: 1, Answers: 1

I am trying to feed a DataFrame into my text processor, splitting the text first into sentences and then into words.

An example text:

When the blow was repeated,together with an admonition in
childish sentences, he turned over upon his back, and held his paws in a peculiar manner.

1) This a numbered sentence
2) This is the second numbered sentence

At the same time with his ears and his eyes he offered a small prayer to the child.

Below are the examples
- This an example of bullet point sentence
- This is also an example of bullet point sentence

Desired output:


[
['When', 'the', 'blow', 'was', 'repeated', ',', 'together', 'with', 'an', 'admonition', 'in', 'childish', 'sentences', ',', 'he', 'turned', 'over', 'upon', 'his', 'back', ',', 'and', 'held', 'his', 'paws', 'in', 'a', 'peculiar', 'manner', '.'],
['1', ')', 'This', 'a', 'numbered', 'sentence'],
['2', ')', 'This', 'is', 'the', 'second', 'numbered', 'sentence'],
['At', 'the', 'same', 'time', 'with', 'his', 'ears', 'and', 'his', 'eyes', 'he', 'offered', 'a', 'small', 'prayer', 'to', 'the', 'child', '.'],
['Below', 'are', 'the', 'examples'],
['-', 'This', 'an', 'example', 'of', 'bullet', 'point', 'sentence'],
['-', 'This', 'is', 'also', 'an', 'example', 'of', 'bullet', 'point', 'sentence']
]

The code I have tried so far:

from nltk.tokenize import RegexpTokenizer

# Intended pattern: match runs of characters other than digits, ')', '-', '*', '?', '!'
tokenizer = RegexpTokenizer(r'[^\d\)\-\*?!]+')

df["Regexp"] = df["comments"].apply(tokenizer.tokenize)

python pandas dataframe nltk tokenize
1 Answer

1 vote

This could be one solution; you can adapt it to your data.

text = """When the blow was repeated,together with an admonition in
childish sentences, he turned over upon his back, and held his paws in a peculiar manner.

1) This a numbered sentence
2) This is the second numbered sentence

At the same time with his ears and his eyes he offered a small prayer to the child.

Below are the examples
- This an example of bullet point sentence
- This is also an example of bullet point sentence"""



import re
import nltk

# First split the text into sentences.
sentences = nltk.sent_tokenize(text)
results = []

for sent in sentences:
    # Insert a blank line before bullet ('-') or numbered lines so each
    # item can be split off as its own chunk.
    sent = re.sub(r'(\n)(-|[0-9])', r"\1\n\2", sent)
    sent = sent.split('\n\n')
    for s in sent:
        results.append(nltk.word_tokenize(s))

results

[
['When', 'the', 'blow', 'was', 'repeated', ',', 'together', 'with', 'an', 'admonition', 'in', 'childish', 'sentences', ',', 'he', 'turned', 'over', 'upon', 'his', 'back', ',', 'and', 'held', 'his', 'paws', 'in', 'a', 'peculiar', 'manner', '.'],
['1', ')', 'This', 'a', 'numbered', 'sentence'],
['2', ')', 'This', 'is', 'the', 'second', 'numbered', 'sentence'],
['At', 'the', 'same', 'time', 'with', 'his', 'ears', 'and', 'his', 'eyes', 'he', 'offered', 'a', 'small', 'prayer', 'to', 'the', 'child', '.'],
['Below', 'are', 'the', 'examples'],
['-', 'This', 'an', 'example', 'of', 'bullet', 'point', 'sentence'],
['-', 'This', 'is', 'also', 'an', 'example', 'of', 'bullet', 'point', 'sentence']
]