Splitting a text DF into a per-sentence DF: how to create a longer pandas DataFrame with apply and a lambda?

Question

This question may look long, but I promise it really isn't complicated.

I have a DF with blocks of text and some ID columns. I want to create a new DF that contains each sentence as its own row.

import pandas as pd

original_df = pd.DataFrame(data={"year": [2018, 2019], "text_nr": [1, 2],
                                 "text": ["This is one sentence. This is another!", "Please help me. I am lost. "]})
original_df
>>>
   year  text_nr  text
0  2018        1  This is one sentence. This is another!
1  2019        2  Please help me. I am lost.

I want to use spacy to split each block of text into individual sentences and build a new DF like this:

sentences_df
>>>
   year  text_nr  sent_nr sentence
0  2018      1       1   "This is one sentence."
1  2018      1       2   "This is another!"
2  2019      2       1   "Please help me."
3  2019      2       2   "I am lost."

I have already found a way to do it like this:

import spacy

nlp = spacy.load("en_core_web_sm")
sentences_list = []

for i, row in original_df.iterrows():
    doc = nlp(row["text"])
    sentences = [(row["year"], row["text_nr"], str(j + 1), sent.text.replace('\n', '').replace('\t', '').strip())
                 for j, sent in enumerate(doc.sents)]
    sentences_list = sentences_list + sentences

sentences_df = pd.DataFrame(sentences_list, columns=["year", "text_nr", "sent_nr", "sentence"])

But this is not very elegant, and I have read that the df.apply(lambda: ...) approach is much faster. However, when I try it I cannot get the correct result. I tried the following two approaches:

  1. First attempt:
nlp = spacy.load("en_core_web_sm")

def sentencizer(x, nlp_model):
    sentences = {}
    doc = nlp_model(x["text"])
    for i, sent in enumerate(doc.sents):
        # each iteration overwrites the same keys, so only the last sentence survives
        sentences["year"] = x["year"]
        sentences["text_nr"] = x["text_nr"]
        sentences["sent_nr"] = str(i + 1)
        sentences["sentence"] = sent.text.replace('\n', '').replace('\t', '').strip()
    return sentences

sentences_df = original_df.head().apply(lambda x: pd.Series(sentencizer(x, nlp)), axis=1)

This only returns the last sentence:

sentences_df
>>>
   year  text_nr sent_nr  sentence
0  2018        1       2  "This is another!"
1  2019        2       2  "I am lost."
  2. Second attempt:
nlp = spacy.load("en_core_web_sm")

def sentencizer(x, nlp_model):
    sentences = {"year": [], "text_nr": [], "sent_nr": [], "sentence": []}
    doc = nlp_model(x["text"])
    for i, sent in enumerate(doc.sents):
        sentences["year"].append(x["year"])
        sentences["text_nr"].append(x["text_nr"])
        sentences["sent_nr"].append(str(i + 1))
        sentences["sentence"].append(sent.text.replace('\n', '').replace('\t', '').strip())
    return sentences

sentences_df = original_df.apply(lambda x: pd.Series(sentencizer(x, nlp)), axis=1)

This gives me a DF with lists as entries:

sentences_df
>>>
   year          text_nr sent_nr    sentence
0  [2018, 2018]  [1, 1]  [1, 2]  ["This is one sentence.", "This is another!"]
1  [2019, 2019]  [2, 2]  [1, 2]  ["Please help me.", "I am lost."]

I could probably try to expand that last df (see the sketch below), but I am sure there is a way to do this correctly in one go. I want to use spacy to split the text because its sentence boundary detection is more advanced than plain regex/string splitting. Thanks for your help. I am quite new to programming, so please bear with me :)
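For reference, the list-valued DF from the second attempt can be expanded into one row per sentence; a minimal sketch, assuming pandas >= 1.3 (which can explode several equal-length list columns at once):

# Minimal sketch: expand the list-valued result of the second attempt.
# Assumes pandas >= 1.3, which supports exploding multiple equal-length list columns.
expanded_df = (sentences_df
               .explode(["year", "text_nr", "sent_nr", "sentence"])
               .reset_index(drop=True))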

python pandas dataframe lambda spacy
1 Answer

Something along these lines might work: str.extractall returns one row per regex match (with an extra match level in the index), which is then merged back onto the original ID columns:

# update punctuations list if needed
punctuations = r'\.\!\?'
(original_df.drop('text', axis=1)
    .merge(original_df.text
               .str.extractall(rf'(?P<sentence>[^{punctuations}]+[{punctuations}])\s?')
               .reset_index('match'),
           left_index=True, right_index=True, how='left')
)

Output:

   year  text_nr  match               sentence
0  2018        1      0  This is one sentence.
0  2018        1      1       This is another!
1  2019        2      0        Please help me.
1  2019        2      1             I am lost.
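
If spacy's sentence boundary detection is a requirement, a similar result can be reached by segmenting with nlp.pipe and then exploding the sentence lists; a minimal sketch, assuming the en_core_web_sm model is installed and a pandas version with DataFrame.explode (>= 0.25):

import pandas as pd
import spacy

nlp = spacy.load("en_core_web_sm")

# Segment each text with spacy, keep one list of sentences per row,
# then explode the lists so every sentence becomes its own row.
sentences_df = original_df.copy()
sentences_df["sentence"] = [[sent.text.strip() for sent in doc.sents]
                            for doc in nlp.pipe(original_df["text"])]
sentences_df = (sentences_df
                .drop(columns="text")
                .explode("sentence")
                .reset_index(drop=True))
# Number the sentences within each original text.
sentences_df["sent_nr"] = sentences_df.groupby("text_nr").cumcount() + 1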