Example of threading with urllib3 in Python

Problem description · votes: 2 · answers: 4

I am trying to use urllib3 in a simple thread pool to fetch a few wiki pages. The script creates one connection per thread (I don't understand why) and then hangs forever. Any hints, advice, or a simple example of urllib3 with threading would be appreciated.

import threadpool
from urllib3 import connection_from_url

HTTP_POOL = connection_from_url(url, timeout=10.0, maxsize=10, block=True)

def fetch(url, fields):
  kwargs={'retries':6}
  return HTTP_POOL.get_url(url, fields, **kwargs)

pool = threadpool.ThreadPool(5)
requests = threadpool.makeRequests(fetch, iterable)
[pool.putRequest(req) for req in requests]

@Lennart's script gives this error (the URLs printed by the threads are interleaved with the tracebacks):

http://en.wikipedia.org/wiki/2010-11_Premier_LeagueTraceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/threadpool.py", line 156, in run
 http://en.wikipedia.org/wiki/List_of_MythBusters_episodeshttp://en.wikipedia.org/wiki/List_of_Top_Gear_episodes http://en.wikipedia.org/wiki/List_of_Unicode_characters    result = request.callable(*request.args, **request.kwds)
  File "crawler.py", line 9, in fetch
    print url, conn.get_url(url)
AttributeError: 'HTTPConnectionPool' object has no attribute 'get_url'
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/threadpool.py", line 156, in run
    result = request.callable(*request.args, **request.kwds)
  File "crawler.py", line 9, in fetch
    print url, conn.get_url(url)
AttributeError: 'HTTPConnectionPool' object has no attribute 'get_url'
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/threadpool.py", line 156, in run
    result = request.callable(*request.args, **request.kwds)
  File "crawler.py", line 9, in fetch
    print url, conn.get_url(url)
AttributeError: 'HTTPConnectionPool' object has no attribute 'get_url'
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/threadpool.py", line 156, in run
    result = request.callable(*request.args, **request.kwds)
  File "crawler.py", line 9, in fetch
    print url, conn.get_url(url)
AttributeError: 'HTTPConnectionPool' object has no attribute 'get_url'

After adding `import threadpool; import urllib3` and `tpool = threadpool.ThreadPool(4)` to @user318904's code, I get this error:

Traceback (most recent call last):
  File "crawler.py", line 21, in <module>
    tpool.map_async(fetch, urls)
AttributeError: ThreadPool instance has no attribute 'map_async'
python multithreading http urllib2 urllib3
4 Answers
1 vote

Of course it creates one connection per thread; how else would each thread be able to fetch a page? And you are trying to use the same connection, created from a single URL, for all the URLs. That can hardly be what you intended.

This code works just fine:

import threadpool
from urllib3 import connection_from_url

def fetch(url):
  kwargs={'retries':6}
  conn = connection_from_url(url, timeout=10.0, maxsize=10, block=True)
  print url, conn.get_url(url)
  print "Done!"

pool = threadpool.ThreadPool(4)
urls = ['http://en.wikipedia.org/wiki/2010-11_Premier_League',
        'http://en.wikipedia.org/wiki/List_of_MythBusters_episodes',
        'http://en.wikipedia.org/wiki/List_of_Top_Gear_episodes',
        'http://en.wikipedia.org/wiki/List_of_Unicode_characters',
        ]
requests = threadpool.makeRequests(fetch, urls)

[pool.putRequest(req) for req in requests]
pool.wait()
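
A side note on the AttributeError quoted in the question: current urllib3 releases do not expose a get_url helper on the pool objects; the modern entry points are request() and urlopen(). Below is a minimal sketch of the same per-thread fetching against today's API, assuming a PoolManager; the URL list and thread count are only illustrative:

import threadpool
import urllib3

# request() on a PoolManager replaces the old get_url() helper.
http = urllib3.PoolManager(maxsize=10, timeout=10.0, retries=6)

def fetch(url):
    # HTTPResponse exposes .status and .data
    response = http.request('GET', url)
    print(url, response.status, len(response.data))

urls = ['http://en.wikipedia.org/wiki/2010-11_Premier_League',
        'http://en.wikipedia.org/wiki/List_of_MythBusters_episodes']

pool = threadpool.ThreadPool(4)
for req in threadpool.makeRequests(fetch, urls):
    pool.putRequest(req)
pool.wait()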

1 vote

Thread programming is hard, so I wrote workerpool to make exactly what you are doing easier.

More specifically, see the Mass Downloader example.

To do the same thing with urllib3, it would look something like this:

import urllib3
import workerpool

# Give the connection pool its own name so it is not shadowed by the
# WorkerPool created below.
http_pool = urllib3.connection_from_url("foo", maxsize=3)

def download(url):
    r = http_pool.get_url(url)
    # TODO: Do something with r.data
    print "Downloaded %s" % url

# Initialize a pool, 5 threads in this case
pool = workerpool.WorkerPool(size=5)

# The ``download`` method will be called with a line from the second 
# parameter for each job.
pool.map(download, open("urls.txt").readlines())

# Send shutdown jobs to all threads, and wait until all the jobs have been completed
pool.shutdown()
pool.wait()

For more sophisticated code, have a look at workerpool.EquippedWorker (and the tests here for example usage). You can make the pool be the toolbox you pass in.


1 vote

Here is my take, a more up-to-date solution using Python 3 and concurrent.futures.ThreadPoolExecutor.

import urllib3
from concurrent.futures import ThreadPoolExecutor

urls = ['http://en.wikipedia.org/wiki/2010-11_Premier_League',
        'http://en.wikipedia.org/wiki/List_of_MythBusters_episodes',
        'http://en.wikipedia.org/wiki/List_of_Top_Gear_episodes',
        'http://en.wikipedia.org/wiki/List_of_Unicode_characters',
        ]

def download(url, cmanager):
    response = cmanager.request('GET', url)
    if response and response.status == 200:
        print("+++++++++ url: " + url)
        print(response.data[:1024])

connection_mgr = urllib3.PoolManager(maxsize=5)
thread_pool = ThreadPoolExecutor(5)
for url in urls:
    thread_pool.submit(download, url, connection_mgr)

Some remarks

  • My code is based on a similar example from the Python Cookbook by Beazley and Jones.
  • I particularly like the fact that, besides urllib3, you only need a module from the standard library.
  • The setup is extremely simple, and if you are only after side effects in download (such as printing, saving to a file, etc.), there is no extra effort in joining the threads.
  • If you want something different, ThreadPoolExecutor.submit actually returns whatever download would return, wrapped in a Future (see the sketch after this list).
  • I found it helpful to align the number of threads in the thread pool with the number of HTTPConnections in the connection pool (via maxsize). Otherwise you may encounter (harmless) warnings when all threads try to access the same server (as in the example).
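
To actually collect those return values instead of relying on printing, a minimal sketch (assuming download is changed to return response.data, and reusing the urls and connection_mgr defined above) could gather the Futures returned by submit:

from concurrent.futures import ThreadPoolExecutor, as_completed

# Assumes download(url, cmanager) returns response.data instead of printing it.
with ThreadPoolExecutor(max_workers=5) as executor:
    futures = {executor.submit(download, url, connection_mgr): url
               for url in urls}
    for future in as_completed(futures):
        url = futures[future]
        body = future.result()   # re-raises any exception from download()
        print(url, len(body), "bytes")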

-1 vote

I use something like this:

#excluding setup for threadpool etc

upool = urllib3.HTTPConnectionPool('en.wikipedia.org', block=True)

urls = ['/wiki/2010-11_Premier_League',
        '/wiki/List_of_MythBusters_episodes',
        '/wiki/List_of_Top_Gear_episodes',
        '/wiki/List_of_Unicode_characters',
        ]

def fetch(path):
    # add error checking
    return upool.get_url(path).data

tpool = ThreadPool()

tpool.map_async(fetch, urls)

# either wait on the result object or give map_async a callback function for the results
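
A note on the follow-up error in the question: the third-party threadpool package's ThreadPool has no map_async, so this answer presumably means the standard library's multiprocessing.pool.ThreadPool, which does provide it. A minimal sketch of the elided setup, under that assumption:

# Assumed setup (not shown in the answer): the standard-library ThreadPool,
# which provides map_async(), unlike the third-party `threadpool` package.
from multiprocessing.pool import ThreadPool

tpool = ThreadPool(4)
result = tpool.map_async(fetch, urls)   # fetch and urls as defined above
pages = result.get(timeout=60)          # blocks until all fetches complete
tpool.close()
tpool.join()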