How can I make NLTK's word_tokenize ignore punctuation between words?

Question · 2 votes · 2 answers

I am using NLTK's word_tokenize and want it to ignore punctuation that appears between the characters of a word.

If I have a sentence:

test = 'Should I trade on the S&P? This works with a phone number 333-445-6635 and email [email protected]'

the word_tokenize method splits 'S&P' into:

'S', '&', 'P', '?'

Is there a way to make the library ignore punctuation between words or letters? Expected output: 'S&P', '?'
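
A minimal reproduction of the behavior described above (assuming NLTK and its punkt tokenizer data are installed):

from nltk import word_tokenize

test = 'Should I trade on the S&P? This works with a phone number 333-445-6635 and email [email protected]'
print(word_tokenize(test))
# ['Should', 'I', 'trade', 'on', 'the', 'S', '&', 'P', '?', ...]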

python nlp nltk tokenize multi-word-expression
2 Answers

3 votes

Let me know how this works with your sentences. I added an additional test with some punctuation. The regular expression is modified, in its last part, from the WordPunctTokenizer regex.

from nltk.tokenize import RegexpTokenizer

punctuation = r'[]!"$%&\'()*+,./:;=#@?[\\^_`{|}~-]?'
tokenizer = RegexpTokenizer(r'\w+' + punctuation + r'\w+?|[^\s]+?')

# result: 
In [156]: tokenizer.tokenize(test)
Out[156]: ['Should', 'I', 'trade', 'on', 'the', 'S&P', '?']

# additional test:
In [225]: tokenizer.tokenize('"I am tired," she said.')
Out[225]: ['"', 'I', 'am', 'tired', ',', '"', 'she', 'said', '.']
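
One caveat, as a sketch: the first branch of the pattern allows at most one punctuation character between runs of word characters, so tokens with two or more internal separators (such as the phone number) are still chopped up, which is what the edit below addresses:

In [226]: tokenizer.tokenize('333-445-6635')
Out[226]: ['333-4', '45-6', '635']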

Edit: the requirements have changed a bit, so we can modify the PottsTweetTokenizer slightly for this:

import re
import html.entities

emoticon_string = r"""
    (?:
      [<>]?
      [:;=8]                     # eyes
      [\-o\*\']?                 # optional nose
      [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth      
      |
      [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth
      [\-o\*\']?                 # optional nose
      [:;=8]                     # eyes
      [<>]?
    )"""
# Twitter symbols/cashtags:  # Added by awd, 20140410.
# Based upon Twitter's regex described here: <https://blog.twitter.com/2013/symbols-entities-tweets>.
cashtag_string = r"""(?:\$[a-zA-Z]{1,6}([._][a-zA-Z]{1,2})?)"""

# The components of the tokenizer:
regex_strings = (
    # Phone numbers:
    r"""
    (?:
      (?:            # (international)
        \+?[01]
        [\-\s.]*
      )?            
      (?:            # (area code)
        [\(]?
        \d{3}
        [\-\s.\)]*
      )?    
      \d{3}          # exchange
      [\-\s.]*   
      \d{4}          # base
    )"""
    ,
    # Emoticons:
    emoticon_string
    ,
    # HTML tags:
    r"""(?:<[^>]+>)"""
    ,
    # URLs:
    r"""(?:http[s]?://t.co/[a-zA-Z0-9]+)"""
    ,
    # Twitter username:
    r"""(?:@[\w_]+)"""
    ,
    # Twitter hashtags:
    r"""(?:\#+[\w_]+[\w\'_\-]*[\w_]+)"""
    ,
    # Twitter symbols/cashtags:
    cashtag_string
    ,
    # email addresses
    r"""(?:[\w.+-]+@[\w-]+\.(?:[\w-]\.?)+[\w-])""",
    # Remaining word types:
    r"""
    (?:[a-z][^\s]+[a-z])           # Words with punctuation (modification here).
    |
    (?:[+\-]?\d+[,/.:-]\d+[+\-]?)  # Numbers, including fractions, decimals.
    |
    (?:[\w_]+)                     # Words without apostrophes or dashes.
    |
    (?:\.(?:\s*\.){1,})            # Ellipsis dots. 
    |
    (?:\S)                         # Everything else that isn't whitespace.
    """
    )
word_re = re.compile(r"""(%s)""" % "|".join(regex_strings), re.VERBOSE | re.I | re.UNICODE)
# The emoticon and cashtag strings get their own regex so that we can preserve case for them as needed:
emoticon_re = re.compile(emoticon_string, re.VERBOSE | re.I | re.UNICODE)
cashtag_re = re.compile(cashtag_string, re.VERBOSE | re.I | re.UNICODE)

# These are for regularizing HTML entities to Unicode:
html_entity_digit_re = re.compile(r"&#\d+;")
html_entity_alpha_re = re.compile(r"&\w+;")
amp = "&amp;"

class CustomTweetTokenizer(object):
    def __init__(self, *, preserve_case: bool=False):
        self.preserve_case = preserve_case

    def tokenize(self, tweet: str) -> list:
        """
        Argument: tweet -- any string object.
        Value: a list of token strings; case is normalized unless preserve_case=True.
        """
        # Fix HTML character entities:
        tweet = self._html2unicode(tweet)
        # Tokenize:
        matches = word_re.finditer(tweet)
        if self.preserve_case:
            return [match.group() for match in matches]
        return [self._normalize_token(match.group()) for match in matches]

    @staticmethod
    def _normalize_token(token: str) -> str:

        if emoticon_re.search(token):
            # Avoid changing emoticons like :D into :d
            return token
        if token.startswith('$') and cashtag_re.search(token):
            return token.upper()
        return token.lower()

    @staticmethod
    def _html2unicode(tweet: str) -> str:
        """
        Internal method that seeks to replace all the HTML entities in
        tweet with their corresponding unicode characters.
        """
        # First the digits:
        ents = set(html_entity_digit_re.findall(tweet))
        if len(ents) > 0:
            for ent in ents:
                entnum = ent[2:-1]
                try:
                    entnum = int(entnum)
                    tweet = tweet.replace(ent, chr(entnum))
                except ValueError:  # int() or chr() can fail on malformed or out-of-range entities
                    pass
        # Now the alpha versions:
        ents = set(html_entity_alpha_re.findall(tweet))
        ents = filter((lambda x: x != amp), ents)
        for ent in ents:
            entname = ent[1:-1]
            try:
                tweet = tweet.replace(ent, chr(html.entities.name2codepoint[entname]))
            except KeyError:  # unknown entity name
                pass
        # Replace &amp; last, outside the loop, so it is regularized even
        # when no other alpha entities are present:
        tweet = tweet.replace(amp, " and ")
        return tweet
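
As a sketch of the entity regularization that runs before tokenizing (the input string is invented for illustration; note that &amp; deliberately becomes the word "and"):

CustomTweetTokenizer._html2unicode('S&amp;P &#36;500')
# result: 'S and P $500'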

To test:

tknzr = CustomTweetTokenizer(preserve_case=True)
tknzr.tokenize(test)

# result:
['Should',
 'I',
 'trade',
 'on',
 'the',
 'S&P',
 '?',
 'This',
 'works',
 'with',
 'a',
 'phone',
 'number',
 '333-445-6635',
 'and',
 'email',
 '[email protected]']
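
With the default preserve_case=False, ordinary tokens are lowercased while emoticons and cashtags keep their case, as a sketch (the input here is invented for illustration):

tknzr = CustomTweetTokenizer()
tknzr.tokenize('Buy $AAPL now :D')

# result:
['buy', '$AAPL', 'now', ':D']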

0 votes

Following on from @mechanical_meat's answer: there is a Twitter text tokenizer in NLTK, most probably derived from the PottsTweetTokenizer, at https://github.com/nltk/nltk/blob/develop/nltk/tokenize/casual.py

from nltk.tokenize import TweetTokenizer

tt = TweetTokenizer()
text = 'Should I trade on the S&P? This works with a phone number 333-445-6635 and email [email protected]'
print(tt.tokenize(text))

[out]:

['Should', 'I', 'trade', 'on', 'the', 'S', '&', 'P', '?', 'This', 'works', 'with', 'a', 'phone', 'number', '333-445-6635', 'and', 'email', '[email protected]']

But that doesn't resolve the S&P problem!

So you can try the multi-word expression approach; see https://stackoverflow.com/a/55644296/610569

from nltk import word_tokenize
from nltk.tokenize import TweetTokenizer
from nltk.tokenize import MWETokenizer

def multiword_tokenize(text, mwe, tokenize_func=word_tokenize):
    # Initialize the MWETokenizer
    protected_tuples = [tokenize_func(word) for word in mwe]
    protected_tuples_underscore = ['_'.join(word) for word in protected_tuples]
    tokenizer = MWETokenizer(protected_tuples)
    # Tokenize the text.
    tokenized_text = tokenizer.tokenize(tokenize_func(text))
    # Replace the underscored protected words with the original MWE
    for i, token in enumerate(tokenized_text):
        if token in protected_tuples_underscore:
            tokenized_text[i] = mwe[protected_tuples_underscore.index(token)]
    return tokenized_text


text = 'Should I trade on the S&P? This works with a phone number 333-445-6635 and email [email protected]'
mwe = ['S&P']

tt = TweetTokenizer()
print(multiword_tokenize(text, mwe, tt.tokenize))

[out]:

['Should', 'I', 'trade', 'on', 'the', 'S&P', '?', 'This', 'works', 'with', 'a', 'phone', 'number', '333-445-6635', 'and', 'email', '[email protected]']
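
The helper is tokenizer-agnostic: tokenize_func defaults to plain word_tokenize, and the protected tuples are built with the same function, so the default path also reassembles 'S&P' (a sketch, shortened to the part that matters here):

print(multiword_tokenize('Should I trade on the S&P?', mwe))

[out]:

['Should', 'I', 'trade', 'on', 'the', 'S&P', '?']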