Queue example - a concurrent web spider
Tornado's tornado.queues module (and the very similar Queue classes in asyncio) implements an asynchronous producer/consumer pattern for coroutines, analogous to the pattern implemented for threads by the Python standard library's queue module.
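For orientation, a minimal producer/consumer sketch (separate from the spider below; the item count and names are arbitrary assumptions) might look like this. The producer hands items to the consumer through the queue and then sends a None sentinel, the same shutdown idiom the spider uses:

import asyncio

from tornado import gen, queues


async def demo():
    q = queues.Queue()

    async def producer():
        for i in range(5):
            await q.put(i)  # hand each item to the queue
        await q.put(None)  # sentinel: tell the consumer there is no more work

    async def consumer():
        async for item in q:  # each iteration awaits q.get()
            if item is None:
                return
            print("consumed %d" % item)

    # Run producer and consumer concurrently, as the spider does with gen.multi.
    await gen.multi([producer(), consumer()])


asyncio.run(demo())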
A coroutine that yields Queue.get pauses until there is an item in the queue. If the queue has a maximum size set, a coroutine that yields Queue.put pauses until there is room for another item.
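A rough sketch of that back-pressure (the maxsize of 2, the five items, and the sleep are illustrative assumptions, not part of the spider below): with a bounded queue, a put can only complete once a consumer has freed a slot with get.

import asyncio

from tornado import queues


async def backpressure_demo():
    q = queues.Queue(maxsize=2)

    async def slow_consumer():
        while True:
            item = await q.get()  # pauses until an item is available
            await asyncio.sleep(0.1)  # pretend the item takes a while to process
            print("got %r" % item)
            q.task_done()

    consumer = asyncio.ensure_future(slow_consumer())
    for i in range(5):
        await q.put(i)  # pauses whenever two items are already waiting
        print("put %d" % i)
    await q.join()  # wait until the consumer has finished every item
    consumer.cancel()


asyncio.run(backpressure_demo())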
A Queue maintains a count of unfinished tasks, which begins at zero. put increments the count; task_done decrements it.
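A small illustration of that counter (the three items are arbitrary): join only returns once every put has been balanced by a task_done.

import asyncio

from tornado import queues


async def counter_demo():
    q = queues.Queue()
    for i in range(3):
        await q.put(i)  # unfinished-task count rises to 1, 2, 3

    async def drain():
        while q.qsize():
            item = await q.get()
            print("handled %r" % item)
            q.task_done()  # count falls back toward zero

    # join resolves only once the count reaches zero again.
    await asyncio.gather(drain(), q.join())


asyncio.run(counter_demo())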
In the web-spider example here, the queue begins containing only base_url. When a worker fetches a page it parses the links and puts new ones in the queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker's call to task_done decrements the counter to zero. The main coroutine, which is waiting for join, is unpaused and finishes.
#!/usr/bin/env python3

import asyncio
import time
from datetime import timedelta
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag

from tornado import gen, httpclient, queues

base_url = "https://tornado.dev.org.tw/en/stable/"
concurrency = 10


async def get_links_from_url(url):
    """Download the page at `url` and parse it for links.

    Returned links have had the fragment after `#` removed, and have been made
    absolute so, e.g. the URL 'gen.html#tornado.gen.coroutine' becomes
    'https://tornado.dev.org.tw/en/stable/gen.html'.
    """
    response = await httpclient.AsyncHTTPClient().fetch(url)
    print("fetched %s" % url)

    html = response.body.decode(errors="ignore")
    return [urljoin(url, remove_fragment(new_url)) for new_url in get_links(html)]


def remove_fragment(url):
    pure_url, frag = urldefrag(url)
    return pure_url


def get_links(html):
    class URLSeeker(HTMLParser):
        def __init__(self):
            HTMLParser.__init__(self)
            self.urls = []

        def handle_starttag(self, tag, attrs):
            href = dict(attrs).get("href")
            if href and tag == "a":
                self.urls.append(href)

    url_seeker = URLSeeker()
    url_seeker.feed(html)
    return url_seeker.urls


async def main():
    q = queues.Queue()
    start = time.time()
    fetching, fetched, dead = set(), set(), set()

    async def fetch_url(current_url):
        if current_url in fetching:
            return

        print("fetching %s" % current_url)
        fetching.add(current_url)
        urls = await get_links_from_url(current_url)
        fetched.add(current_url)

        for new_url in urls:
            # Only follow links beneath the base URL
            if new_url.startswith(base_url):
                await q.put(new_url)

    async def worker():
        async for url in q:
            if url is None:
                return
            try:
                await fetch_url(url)
            except Exception as e:
                print("Exception: %s %s" % (e, url))
                dead.add(url)
            finally:
                q.task_done()

    await q.put(base_url)

    # Start workers, then wait for the work queue to be empty.
    workers = gen.multi([worker() for _ in range(concurrency)])
    await q.join(timeout=timedelta(seconds=300))
    assert fetching == (fetched | dead)
    print("Done in %d seconds, fetched %s URLs." % (time.time() - start, len(fetched)))
    print("Unable to fetch %s URLs." % len(dead))

    # Signal all the workers to exit.
    for _ in range(concurrency):
        await q.put(None)
    await workers


if __name__ == "__main__":
    asyncio.run(main())