I have some annotated HTML source code, where the code is similar to what you would obtain with requests, and the annotations are labels carrying the character indices at which the marked items begin and end.
For example, the source could be:
<body><text>Hello world!</text><text>This is my code. And this is a number 42</text></body>
and the labels could be, for example:
[{'label':'salutation', 'start':12, 'end':25},
{'label':'verb', 'start':42, 'end':45},
{'label':'size', 'start':75, 'end':78}]
referring to "Hello world", "is" and "42". We know in advance that the labels do not overlap.
I want to process the source code and the annotations to produce a list of tokens suited to the HTML format.
For example, here it could produce something like this:
['<body>', '<text>', 'hello', 'world', '</text>', '<text>', 'this', 'is', 'my', 'code', 'and', 'this', 'is', 'a', 'number', '[NUMBER]', '</text>', '</body>']
It must also map the annotations onto the tokenization, producing a label sequence of the same length as the token list, for example:
['NONE', 'NONE', 'salutation', 'salutation', 'NONE', 'NONE', 'NONE', 'verb', 'NONE', 'NONE', 'NONE', 'NONE', 'NONE', 'NONE', 'NONE', 'size', 'NONE', 'NONE']
What is the simplest way to do this in Python?
You can recurse over the BeautifulSoup parse tree to produce a list of all the tags and contents, which can then be used to match the labels:
from bs4 import BeautifulSoup as soup
import re

content = '<body><text>Hello world!</text><text>This is my code. And this is a number 42</text></body>'

def tokenize(d):
    # Emit the opening tag, recurse into children, then emit the closing tag.
    yield f'<{d.name}>'
    for i in d.contents:
        if not isinstance(i, str):
            yield from tokenize(i)
        else:
            yield from i.split()
    yield f'</{d.name}>'

data = list(tokenize(soup(content, 'html.parser').body))
Output:
['<body>', '<text>', 'Hello', 'world!', '</text>', '<text>', 'This', 'is', 'my', 'code.', 'And', 'this', 'is', 'a', 'number', '42', '</text>', '</body>']
Then, to match the labels:
labels = [{'label': 'salutation', 'start': 12, 'end': 25},
          {'label': 'verb', 'start': 42, 'end': 45},
          {'label': 'size', 'start': 75, 'end': 78}]

# The words covered by each annotation.
tokens = [{**i, 'word': content[i['start']:i['end'] - 1].split()} for i in labels]

# For every distinct token, an iterator over its character spans in the source;
# re.escape guards against tokens containing regex metacharacters.
indices = {i: iter([[c, c + len(i) + 1] for c in range(len(content))
                    if re.findall(r'^\W' + re.escape(i), content[c - 1:])])
           for i in data}
new_data = [[i, next(indices[i], None)] for i in data]

# A token receives an annotation's label if its span falls inside the annotation.
result = [(lambda x: 'NONE' if not x else x[0])(
              [c['label'] for c in tokens
               if b and c['start'] <= b[0] and b[-1] <= c['end']])
          for a, b in new_data]
Output:
['NONE', 'NONE', 'salutation', 'salutation', 'NONE', 'NONE', 'NONE', 'verb', 'NONE', 'NONE', 'NONE', 'NONE', 'NONE', 'NONE', 'NONE', 'size', 'NONE', 'NONE']
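An alternative that avoids re-locating every token with a regex afterwards is to track character offsets while tokenizing, so each word can be compared against the annotation spans directly. The following is a minimal sketch; the single-pass regex, the punctuation stripping, and the `[NUMBER]` normalization are my assumptions about how tokens should be normalized:

```python
import re

content = ('<body><text>Hello world!</text>'
           '<text>This is my code. And this is a number 42</text></body>')
annotations = [{'label': 'salutation', 'start': 12, 'end': 25},
               {'label': 'verb', 'start': 42, 'end': 45},
               {'label': 'size', 'start': 75, 'end': 78}]

def tokenize_with_labels(html, annotations):
    tokens, labels = [], []
    # One pass over the source: match either a tag or a run of non-space,
    # non-'<' characters, keeping the character offset of each match.
    for m in re.finditer(r'<[^>]+>|[^<\s]+', html):
        piece = m.group()
        if piece.startswith('<'):
            tokens.append(piece)
            labels.append('NONE')
        else:
            # A word is labelled if its start offset falls inside a span.
            label = next((a['label'] for a in annotations
                          if a['start'] <= m.start() < a['end']), 'NONE')
            # Normalization (my assumption): lowercase, strip trailing
            # punctuation, and replace pure numbers with [NUMBER].
            word = piece.strip('.,!?').lower()
            if word.isdigit():
                word = '[NUMBER]'
            if word:
                tokens.append(word)
                labels.append(label)
    return tokens, labels

tokens, labels = tokenize_with_labels(content, annotations)
```

Because the offsets come straight from the tokenizer, word-level labels work even when a single text chunk mixes labelled and unlabelled words, and no second matching pass is needed.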
So far I have done this using HTMLParser:
from html.parser import HTMLParser
from tensorflow.keras.preprocessing.text import text_to_word_sequence

class HTML_tokenizer_labeller(HTMLParser):
    def __init__(self, annotations, *args, **kwargs):
        super(HTML_tokenizer_labeller, self).__init__(*args, **kwargs)
        self.tokens = []
        self.labels = []
        self.annotations = annotations

    def handle_starttag(self, tag, attrs):
        self.tokens.append(f'<{tag}>')
        self.labels.append('OTHER')

    def handle_endtag(self, tag):
        self.tokens.append(f'</{tag}>')
        self.labels.append('OTHER')

    def handle_data(self, data):
        tokens = text_to_word_sequence(data)
        # getpos() returns (line, column); on single-line input the column
        # is the absolute character offset of this data chunk.
        pos = self.getpos()[1]
        for annotation in self.annotations:
            if annotation['start'] <= pos <= annotation['end']:
                label = annotation['label']
                break
        else:
            label = 'OTHER'
        self.tokens += tokens
        self.labels += [label] * len(tokens)
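For reference, here is how such a parser can be driven end to end. To keep the sketch dependency-free I substitute a plain regex split for `text_to_word_sequence` (my substitution; Keras similarly lowercases and strips punctuation) and emit 'NONE' to match the desired output above:

```python
import re
from html.parser import HTMLParser

class HTMLTokenizerLabeller(HTMLParser):
    """Stdlib-only variant of the parser above (regex split, 'NONE' labels)."""
    def __init__(self, annotations, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.tokens = []
        self.labels = []
        self.annotations = annotations

    def handle_starttag(self, tag, attrs):
        self.tokens.append(f'<{tag}>')
        self.labels.append('NONE')

    def handle_endtag(self, tag):
        self.tokens.append(f'</{tag}>')
        self.labels.append('NONE')

    def handle_data(self, data):
        # Substitute for text_to_word_sequence: lowercase, split on non-word.
        tokens = re.findall(r'\w+', data.lower())
        # getpos() -> (line, column); for single-line input the column is the
        # absolute character offset of this data chunk.
        pos = self.getpos()[1]
        for annotation in self.annotations:
            if annotation['start'] <= pos <= annotation['end']:
                label = annotation['label']
                break
        else:
            label = 'NONE'
        self.tokens += tokens
        self.labels += [label] * len(tokens)

parser = HTMLTokenizerLabeller([{'label': 'salutation', 'start': 12, 'end': 25}])
parser.feed('<body><text>Hello world!</text></body>')
```

Note the limitation: getpos() gives the offset of the whole data chunk, so one label is chosen per chunk. For the second <text> in the full example ("This is my code. ...") the chunk starts outside every annotation and all of its words would come out 'NONE'; per-token offsets (as in the earlier sketch) are needed for true word-level labels.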