How can I get spaCy to tokenize a hashtag as a single token?

Question · Votes: 2 · Answers: 4

In sentences that contain hashtags, such as tweets, spaCy's tokenizer splits each hashtag into two tokens:

import spacy
nlp = spacy.load('en')
doc = nlp(u'This is a #sentence.')
[t for t in doc]

Output:

[This, is, a, #, sentence, .]

I would like the hashtag to be tokenized as:

[This, is, a, #sentence, .]

Is that possible?

Thanks

tokenize spacy
4 Answers
2 votes
  1. You can do some pre- and post-processing on the string, which lets you sidestep the '#'-based tokenization and is easy to implement. For example:
>>> import re
>>> import spacy
>>> nlp = spacy.load('en')
>>> sentence = u'This is my twitter update #MyTopic'
>>> parsed = nlp(sentence)
>>> [token.text for token in parsed]
[u'This', u'is', u'my', u'twitter', u'update', u'#', u'MyTopic']
>>> new_sentence = re.sub(r'#(\w+)', r'ZZZPLACEHOLDERZZZ\1', sentence)
>>> new_sentence
u'This is my twitter update ZZZPLACEHOLDERZZZMyTopic'
>>> parsed = nlp(new_sentence)
>>> [token.text for token in parsed]
[u'This', u'is', u'my', u'twitter', u'update', u'ZZZPLACEHOLDERZZZMyTopic']
>>> [x.replace(u'ZZZPLACEHOLDERZZZ', u'#') for x in [token.text for token in parsed]]
[u'This', u'is', u'my', u'twitter', u'update', u'#MyTopic']
  2. You could try setting custom separators in spaCy's tokenizer. I am not aware of a direct way to do that (one possible way of tweaking the prefix rules is sketched after the merge example below).

Update: You can use a regex to find the spans of text that you want to keep as single tokens, and re-tokenize them with the span.merge method described here: https://spacy.io/docs/api/span#merge

Merge example:

>>> import spacy
>>> import re
>>> nlp = spacy.load('en')
>>> my_str = u'Tweet hashtags #MyHashOne #MyHashTwo'
>>> parsed = nlp(my_str)
>>> [(x.text,x.pos_) for x in parsed]
[(u'Tweet', u'PROPN'), (u'hashtags', u'NOUN'), (u'#', u'NOUN'), (u'MyHashOne', u'NOUN'), (u'#', u'NOUN'), (u'MyHashTwo', u'PROPN')]
>>> indexes = [m.span() for m in re.finditer('#\w+',my_str,flags=re.IGNORECASE)]
>>> indexes
[(15, 25), (26, 36)]
>>> for start,end in indexes:
...     parsed.merge(start_idx=start,end_idx=end)
... 
#MyHashOne
#MyHashTwo
>>> [(x.text,x.pos_) for x in parsed]
[(u'Tweet', u'PROPN'), (u'hashtags', u'NOUN'), (u'#MyHashOne', u'NOUN'), (u'#MyHashTwo', u'PROPN')]
>>> 
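Regarding point 2 above (custom separators): a rough sketch of one way to do it is to rebuild the tokenizer's prefix rules without the '#' entry, so that '#' is no longer split off at the start of a token. This is not from the original answer, and the exact contents and escaping of the default prefix list can differ between spaCy versions, hence the loose filter:

import spacy

nlp = spacy.load('en')

# Drop the '#' rule from the default prefixes (entries may be stored escaped, e.g. '\#').
prefixes = [p for p in nlp.Defaults.prefixes if '#' not in p]
nlp.tokenizer.prefix_search = spacy.util.compile_prefix_regex(prefixes).search

doc = nlp(u'This is a #sentence.')
print([t.text for t in doc])  # expected: ['This', 'is', 'a', '#sentence', '.']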

1 vote

This is more of an add-on to @DhruvPathak's great answer and a shameless copy from the GitHub thread linked below (including the even better answer there by @csvance). Since v2.0, spaCy features the add_pipe method, which means you can wrap @DhruvPathak's answer in a function and add that step (conveniently) to your nlp processing pipeline, as shown below.

The quotation starts here:

import spacy

def hashtag_pipe(doc):
    # Merge each '#' token with the word that follows it (its head) into one token.
    merged_hashtag = False
    while True:
        for token_index, token in enumerate(doc):
            if token.text == '#':
                if token.head is not None:
                    start_index = token.idx
                    end_index = start_index + len(token.head.text) + 1
                    if doc.merge(start_index, end_index) is not None:
                        merged_hashtag = True
                        break
        if not merged_hashtag:
            break
        merged_hashtag = False
    return doc

nlp = spacy.load('en')
nlp.add_pipe(hashtag_pipe)

doc = nlp("twitter #hashtag")
assert len(doc) == 2
assert doc[0].text == 'twitter'
assert doc[1].text == '#hashtag'

The quotation ends here; check out the full thread at how to add hashtags to the part of speech tagger #503.

PS: It's clear when you read the code, but for the copy-and-pasters: don't disable the parser :)
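To make the PS concrete, here is a minimal sketch (assuming the hashtag_pipe function from the quotation above): the component uses token.head to find the word that follows '#', and heads are assigned by the dependency parser, so the parser has to stay in the pipeline:

import spacy

nlp = spacy.load('en')                        # keep the parser enabled (the default)
# nlp = spacy.load('en', disable=['parser'])  # without the parser, hashtag_pipe cannot rely on token.head
nlp.add_pipe(hashtag_pipe)                    # hashtag_pipe as defined in the quotation above

doc = nlp("twitter #hashtag")
print([t.text for t in doc])                  # expected: ['twitter', '#hashtag']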


1 vote

I found this on GitHub; it uses spaCy's Matcher:

import spacy
from spacy.matcher import Matcher

nlp = spacy.load('en')
matcher = Matcher(nlp.vocab)
matcher.add('HASHTAG', None, [{'ORTH': '#'}, {'IS_ASCII': True}])

doc = nlp('This is a #sentence. Here is another #hashtag. #The #End.')
matches = matcher(doc)
hashtags = []
for match_id, start, end in matches:
    hashtags.append(doc[start:end])

for span in hashtags:
    span.merge()

print([t.text for t in doc])

Output:

['This', 'is', 'a', '#sentence', '.', 'Here', 'is', 'another', '#hashtag', '.', '#The', '#End', '.']

The matched hashtags are also available as spans in the hashtags list:

print(hashtags)

Output:

[#sentence, #hashtag, #The, #End]
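A note in case you are on spaCy v3 or later: the example above needs two small adjustments, sketched here under the assumption that the en_core_web_sm model is installed. Matcher.add now takes a list of patterns as its second argument, and Span.merge has been replaced by the Doc.retokenize context manager:

import spacy
from spacy.matcher import Matcher

nlp = spacy.load('en_core_web_sm')
matcher = Matcher(nlp.vocab)
matcher.add('HASHTAG', [[{'ORTH': '#'}, {'IS_ASCII': True}]])  # v3 signature: a list of patterns

doc = nlp('This is a #sentence. Here is another #hashtag. #The #End.')

# Merge every matched span ('#' plus the following token) in a single retokenization pass
with doc.retokenize() as retokenizer:
    for match_id, start, end in matcher(doc):
        retokenizer.merge(doc[start:end])

print([t.text for t in doc])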


0 votes

I also spent quite some time on this, so I'm sharing what I came up with: subclassing the Tokenizer and adding the regex for hashtags to the default URL_PATTERN was the easiest solution for me, and I additionally added a custom extension that matches on hashtags so they can be identified later:

import re
import spacy
from spacy.language import Language
from spacy.tokenizer import Tokenizer
from spacy.tokens import Token

nlp = spacy.load('en_core_web_sm')

def create_tokenizer(nlp):
    # contains the regex to match all sorts of urls:
    from spacy.lang.tokenizer_exceptions import URL_PATTERN

    # spacy defaults: when the standard behaviour is required, they
    # need to be included when subclassing the tokenizer
    prefix_re = spacy.util.compile_prefix_regex(Language.Defaults.prefixes)
    infix_re = spacy.util.compile_infix_regex(Language.Defaults.infixes)
    suffix_re = spacy.util.compile_suffix_regex(Language.Defaults.suffixes)

    # extending the default url regex with regex for hashtags with "or" = |
    hashtag_pattern = r'''|^(#[\w_-]+)$'''
    url_and_hashtag = URL_PATTERN + hashtag_pattern
    url_and_hashtag_re = re.compile(url_and_hashtag)

    # set a custom extension to match if token is a hashtag
    hashtag_getter = lambda token: token.text.startswith('#')
    Token.set_extension('is_hashtag', getter=hashtag_getter)

    return Tokenizer(nlp.vocab, prefix_search=prefix_re.search,
                     suffix_search=suffix_re.search,
                     infix_finditer=infix_re.finditer,
                     token_match=url_and_hashtag_re.match
                     )

nlp.tokenizer = create_tokenizer(nlp)
doc = nlp("#spreadhappiness #smilemore [email protected] https://www.somedomain.com/foo")

for token in doc:
    print(token.text)
    if token._.is_hashtag:
        print("-> matches hashtag")

# returns: "#spreadhappiness -> matches hashtag #smilemore -> matches hashtag [email protected] https://www.somedomain.com/foo"
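As a small follow-up (continuing from the snippet above, so doc and the is_hashtag extension are assumed to be defined), the custom extension also makes it easy to pull out just the hashtags:

hashtags = [token.text for token in doc if token._.is_hashtag]
print(hashtags)  # expected: ['#spreadhappiness', '#smilemore']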