The same code gives different output depending on whether it uses a list comprehension or a generator

Question · votes: 1 · answers: 1

I am trying to scrape this website and get every word from it. But using generators gives me more words than using lists, and those extra words are inconsistent: sometimes there is 1, sometimes none, sometimes more than 30. I have read about generators in the Python documentation and looked up some questions about them. My understanding is that it should make no difference. I don't understand what is happening under the hood. I am using Python 3.6. I have also read Generator Comprehension different output from list comprehension?, but I can't make sense of this situation from it.

Here is the first function, using generator expressions.

import re
import requests
from bs4 import BeautifulSoup
from nltk.corpus import stopwords  # needs a one-time nltk.download('stopwords')

def text_cleaner1(website):
    '''
    This function just cleans up the raw html so that I can look at it.
    Inputs: a URL to investigate
    Outputs: Cleaned text only
    '''
    try:
        site = requests.get(website).text # Connect to the job posting (fixed: this read the undefined name url instead of the website parameter)
    except: 
        return   # Need this in case the website isn't there anymore or some other weird connection problem 

    soup_obj = BeautifulSoup(site, "lxml") # Get the html from the site


    for script in soup_obj(["script", "style"]):
        script.extract() # Remove these two elements from the BS4 object

    text = soup_obj.get_text() # Get the text from this

    lines = (line.strip() for line in text.splitlines()) # break into lines

    print(type(lines))

    chunks = (phrase.strip() for line in lines for phrase in line.split("  ")) # break multi-headlines into a line each

    print(type(chunks))

    def chunk_space(chunk):
        chunk_out = chunk + ' ' # Need to fix spacing issue
        return chunk_out  

    text = ''.join(chunk_space(chunk) for chunk in chunks if chunk).encode('utf-8') # Get rid of all blank lines and ends of line

    # Now clean out all of the unicode junk (this line works great!!!)


    try:
        text = text.decode('unicode_escape').encode('ascii', 'ignore') # Need this as some websites aren't formatted
    except:                                                            # in a way that this works, can occasionally throw
        return                                                         # an exception  

    text = str(text)

    text = re.sub("[^a-zA-Z.+3]"," ", text)  # Now get rid of any terms that aren't words (include 3 for d3.js)
                                             # Also include + for C++


    text = text.lower().split()  # Go to lower case and split them apart


    stop_words = set(stopwords.words("english")) # Filter out any stop words
    text = [w for w in text if w not in stop_words]



    text = set(text) # Last, just get the set of these. Ignore counts (we are just looking at whether a term existed
                            # or not on the website)

    return text

Here is the second function, using list comprehensions.

def text_cleaner2(website):
    '''
    This function just cleans up the raw html so that I can look at it.
    Inputs: a URL to investigate
    Outputs: Cleaned text only
    '''
    try:
        site = requests.get(website).text # Connect to the job posting (same fix as above: use the website parameter)
    except: 
        return   # Need this in case the website isn't there anymore or some other weird connection problem 

    soup_obj = BeautifulSoup(site, "lxml") # Get the html from the site


    for script in soup_obj(["script", "style"]):
        script.extract() # Remove these two elements from the BS4 object

    text = soup_obj.get_text() # Get the text from this

    lines = [line.strip() for line in text.splitlines()] # break into lines

    chunks = [phrase.strip() for line in lines for phrase in line.split("  ")] # break multi-headlines into a line each

    def chunk_space(chunk):
        chunk_out = chunk + ' ' # Need to fix spacing issue
        return chunk_out  

    text = ''.join(chunk_space(chunk) for chunk in chunks if chunk).encode('utf-8') # Get rid of all blank lines and ends of line

    # Now clean out all of the unicode junk (this line works great!!!)


    try:
        text = text.decode('unicode_escape').encode('ascii', 'ignore') # Need this as some websites aren't formatted
    except:                                                            # in a way that this works, can occasionally throw
        return                                                         # an exception  

    text = str(text)

    text = re.sub("[^a-zA-Z.+3]"," ", text)  # Now get rid of any terms that aren't words (include 3 for d3.js)
                                             # Also include + for C++


    text = text.lower().split()  # Go to lower case and split them apart


    stop_words = set(stopwords.words("english")) # Filter out any stop words
    text = [w for w in text if w not in stop_words]



    text = set(text) # Last, just get the set of these. Ignore counts (we are just looking at whether a term existed
                            # or not on the website)

    return text

This code gives different results at random. Both functions return sets, so the subtraction below computes the set difference: the words the first call found that the second did not.

text_cleaner1("https://www.indeed.com/rc/clk?jk=02ecc871f377f959&fccid=c46d0116f6e69eae") - text_cleaner2("https://www.indeed.com/rc/clk?jk=02ecc871f377f959&fccid=c46d0116f6e69eae")
Tags: python, web-scraping, beautifulsoup, generator, list-comprehension
1 answer

0 votes

A generator is "lazy" - it does not execute its code at once, but only later, when the results are needed. That means it does not take the values from variables or functions right away; instead it keeps references to those variables and functions and looks the values up when it is iterated.
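A minimal sketch of that late lookup (my illustration, not from the original answer):

x = 1
gen = (x * n for n in range(3))  # nothing runs yet; the genexp only keeps a reference to x
x = 10                           # rebind x before the generator is consumed
print(list(gen))                 # [0, 10, 20], not [0, 1, 2]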

An example from the linked question:

all_configs = [
    {'a': 1, 'b':3},
    {'a': 2, 'b':2}
]
unique_keys = ['a','b']


for x in zip( *([c[k] for k in unique_keys] for c in all_configs) ):
    print(x)

print('---')
for x in zip( *((c[k] for k in unique_keys) for c in all_configs) ):
    print(list(x))
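Running this as written, the first loop pairs up the values from each config, while the second collapses to the last config's values. The output should be roughly:

(1, 2)
(3, 2)
---
[2, 2]
[2, 2]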

In the generator version, there is a for loop inside another for loop.

The inner generator references the variable c, not its value; it will fetch the value later.

Later (when results have to be produced from the generator), the outer generator for c in all_configs starts executing. As it runs, it loops and creates two inner generators, each holding a reference to c rather than the value of c - but the looping also keeps rebinding c. So in the end you have two inner generators, and c is left holding {'a': 2, 'b': 2}.

Only afterwards are the inner generators executed, finally reading the value from c - but at that moment c already holds {'a': 2, 'b': 2}.
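One way to avoid this (my suggestion, not part of the original answer) is to force the inner expression to read c while the outer loop still has it bound to the right config, for example by materializing it immediately:

for x in zip( *(tuple(c[k] for k in unique_keys) for c in all_configs) ):
    print(x)  # tuple(...) consumes the inner generator right away, so this prints (1, 2) then (3, 2)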


BTW: a similar problem exists in tkinter when you create buttons in a for loop and use a lambda as the callback.
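A minimal sketch of that tkinter pitfall (my illustration; the default-argument fix is the usual idiom):

import tkinter as tk

root = tk.Tk()
for i in range(3):
    # Every callback closes over the same i; by the time a button is
    # clicked the loop has finished and i == 2, so all three print 2.
    tk.Button(root, text=str(i), command=lambda: print(i)).pack()
    # The usual fix binds the current value as a default argument:
    # tk.Button(root, text=str(i), command=lambda i=i: print(i)).pack()
root.mainloop()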
