How to split a document into sentences with spaCy

Problem description (votes: 0, answers: 6)

How do I split a document (e.g. a paragraph, a book, etc.) into sentences?

For example, how can I turn

"The dog ran. The cat jumped"
into
["The dog ran", "The cat jumped"]
using spaCy?

python spacy sentence text-segmentation
6 Answers
35 votes

The most up-to-date answer looks like this:

from __future__ import unicode_literals, print_function
from spacy.lang.en import English # updated

raw_text = 'Hello, world. Here are two sentences.'
nlp = English()
nlp.add_pipe('sentencizer')
doc = nlp(raw_text)
sentences = [sent.text.strip() for sent in doc.sents]
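
Running this on the sample text should print something like the following (a quick check, assuming the sentencizer's default punctuation rules):

print(sentences)
# ['Hello, world.', 'Here are two sentences.']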

27 votes

Answer

import spacy
nlp = spacy.load('en_core_web_sm')

text = 'My first birthday was great. My 2. was even better.'
sentences = [i for i in nlp(text).sents]
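
Note that doc.sents yields spaCy Span objects rather than plain strings; if you want str values instead, a small conversion sketch (the variable name here is just illustrative):

sentence_strings = [sent.text for sent in nlp(text).sents]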

Additional information
This assumes that you already have the model "en_core_web_sm" installed on your system. If not, you can easily install it by running the following command in your terminal:

$ python -m spacy download en_core_web_sm

(See here for an overview of all available models.)
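
If you prefer to handle a missing model from inside your script, one possible sketch (my own addition, not part of the original answer) is to catch the OSError that spacy.load raises and download the package programmatically:

import spacy
from spacy.cli import download

try:
    nlp = spacy.load("en_core_web_sm")
except OSError:
    # model not installed yet: fetch it, then load again
    download("en_core_web_sm")
    nlp = spacy.load("en_core_web_sm")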

Depending on your data, this can give better results than just using
spacy.lang.en.English
. A (very simple) comparison example:

import spacy
from spacy.lang.en import English

nlp_simple = English()
nlp_simple.add_pipe('sentencizer')  # spaCy 3.x API; in spaCy 2.x this was add_pipe(create_pipe('sentencizer'))

nlp_better = spacy.load('en_core_web_sm')


text = 'My first birthday was great. My 2. was even better.'

for nlp in [nlp_simple, nlp_better]:
    for i in nlp(text).sents:
        print(i)
    print('-' * 20)

Output:

My first birthday was great.
My 2.
was even better.
--------------------
My first birthday was great.
My 2. was even better.
--------------------

17 votes

In spaCy 3.0.1 the pipeline API changed.

from spacy.lang.en import English 

nlp = English()
nlp.add_pipe('sentencizer')


def split_in_sentences(text):
    doc = nlp(text)
    return [str(sent).strip() for sent in doc.sents]
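
For example, calling the helper on the text from the question should give something like:

print(split_in_sentences("The dog ran. The cat jumped"))
# ['The dog ran.', 'The cat jumped']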

13 votes

From spaCy's GitHub support page:

# Note: this uses the old spacy.en import and the deprecated Span.string attribute;
# current releases use spacy.lang.en and sent.text instead (see the updated answer below).
from __future__ import unicode_literals, print_function
from spacy.en import English

raw_text = 'Hello, world. Here are two sentences.'
nlp = English()
doc = nlp(raw_text)
sentences = [sent.string.strip() for sent in doc.sents]

3 votes

For current versions (3.x and above), use the code below to get the best results from a statistical model rather than the rule-based
sentencizer
component.

Also note that you can speed up processing and reduce the memory footprint if you only include the pipeline components that are needed for sentence segmentation.

import spacy

# instantiate pipeline with any model of your choosing
nlp = spacy.load("en_core_web_sm")

text = "The dog ran. The cat jumped. The 2. fox hides behind the house."

# only select necessary pipeline components to speed up processing
with nlp.select_pipes(enable=['tok2vec', "parser", "senter"]):
    doc = nlp(text)
    for sentence in doc.sents:
        print(sentence)
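
As a variation (my own sketch, not part of the original answer), you can also drop the parser entirely and enable only the statistical senter component, which is trained specifically for sentence boundaries and is usually faster:

import spacy

# load the model without the parser and switch on the senter, which is disabled by default
nlp = spacy.load("en_core_web_sm", exclude=["parser"])
nlp.enable_pipe("senter")

doc = nlp("The dog ran. The cat jumped. The 2. fox hides behind the house.")
print([sent.text for sent in doc.sents])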

1 vote
Updated to reflect the comments on the first answer:

from spacy.lang.en import English

raw_text = 'Hello, world. Here are two sentences.'
nlp = English()
nlp.add_pipe('sentencizer')
doc = nlp(raw_text)
sentences = [sent.text.strip() for sent in doc.sents]