

python - CrawlSpider in Scrapy fails to match urls

Views: 61  Date: 2022-07-18 10:45:15

問(wèn)題描述

My spider code is below. The rules never extract anything, and I can't figure out why.

#encoding: utf-8
import re
import requests
import time
from bs4 import BeautifulSoup
import scrapy
from scrapy.http import Request
from craler.items import CralerItem
import urllib2
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class MoyanSpider(CrawlSpider):
    try:
        name = 'maoyan'
        allowed_domains = ['http://maoyan.com']
        start_urls = ['http://maoyan.com/films']
        rules = (
            Rule(LinkExtractor(allow=(r'films/\d+.*')), callback='parse_item', follow=True),
        )
    except Exception, e:
        print e.message

    # def start_requests(self):
    #     for i in range(22863):
    #         url = self.start_urls + str(i*30)
    #         yield Request(url, self.parse, headers=self.headers)

    def parse_item(self, response):
        item = CralerItem()
        # time.sleep(2)
        # moveis = BeautifulSoup(response.text, 'lxml').find('p', class_='movies-list').find_all('dd')
        try:
            time.sleep(2)
            item['name'] = response.find('p', class_='movie-brief-container').find('h3', class_='name').get_text()
            item['score'] = response.find('p', class_='movie-index-content score normal-score').find('span', class_='stonefont').get_text()
            url = 'http://maoyan.com' + response.find('p', class_='channel-detail movie-item-title').find('a')['href']
            # item['url'] = url
            item['id'] = response.url.split('/')[-1]
            # html = requests.get(url).content
            # soup = BeautifulSoup(html, 'lxml')
            temp = response.find('p', 'movie-brief-container').find('ul').get_text()
            temp = temp.split('\n')
            # item['cover'] = soup.find('p', 'avater-shadow').find('img')['src']
            item['tags'] = temp[1]
            item['countries'] = temp[3].strip()
            item['duration'] = temp[4].split('/')[-1]
            item['time'] = temp[6]
            # print item['name']
            return item
        except Exception, e:
            print e.message

The log output from the run:

C:\Python27\python.exe "C:\Program Files (x86)\JetBrains\PyCharm Community Edition 2016.2.2\helpers\pydev\pydevd.py" --multiproc --qt-support --client 127.0.0.1 --port 12779 --file D:/scrapy/craler/entrypoint.py
pydev debugger: process 30468 is connecting
Connected to pydev debugger (build 162.1967.10)
D:/scrapy/craler\craler\spiders\maoyan.py:12: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors` is deprecated, use `scrapy.linkextractors` instead
  from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
D:/scrapy/craler\craler\spiders\maoyan.py:12: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors.sgml` is deprecated, use `scrapy.linkextractors.sgml` instead
  from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
2017-05-08 21:58:14 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: craler)
2017-05-08 21:58:14 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'craler.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['craler.spiders'], 'HTTPCACHE_ENABLED': True, 'BOT_NAME': 'craler', 'COOKIES_ENABLED': False, 'DOWNLOAD_DELAY': 3}
2017-05-08 21:58:14 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-05-08 21:58:14 [py.warnings] WARNING: D:/scrapy/craler\craler\middlewares.py:11: ScrapyDeprecationWarning: Module `scrapy.contrib.downloadermiddleware.useragent` is deprecated, use `scrapy.downloadermiddlewares.useragent` instead
  from scrapy.contrib.downloadermiddleware.useragent import UserAgentMiddleware
2017-05-08 21:58:14 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'craler.middlewares.RotateUserAgentMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats',
 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2017-05-08 21:58:15 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-05-08 21:58:15 [scrapy.middleware] INFO: Enabled item pipelines:
['craler.pipelines.CralerPipeline']
2017-05-08 21:58:15 [scrapy.core.engine] INFO: Spider opened
2017-05-08 21:58:15 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-08 21:58:15 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-05-08 21:58:15 [root] INFO: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)
2017-05-08 21:58:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://maoyan.com/robots.txt> (referer: None) ['cached']
2017-05-08 21:58:15 [root] INFO: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50
2017-05-08 21:58:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://maoyan.com/films> (referer: None) ['cached']
2017-05-08 21:58:15 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'maoyan.com': <GET http://maoyan.com/films/248683>
2017-05-08 21:58:15 [scrapy.core.engine] INFO: Closing spider (finished)
2017-05-08 21:58:15 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 534,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 6913,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 5, 8, 13, 58, 15, 357000),
 'httpcache/hit': 2,
 'log_count/DEBUG': 4,
 'log_count/INFO': 9,
 'log_count/WARNING': 1,
 'offsite/domains': 1,
 'offsite/filtered': 30,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 5, 8, 13, 58, 15, 140000)}
2017-05-08 21:58:15 [scrapy.core.engine] INFO: Spider closed (finished)
Process finished with exit code 0

問(wèn)題解答

Answer 1:

The main issue is allowed_domains. Your extraction rule is fine. Written like this, the spider crawls the links:
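The rule can be checked on its own, outside Scrapy. This small snippet applies the same `allow` pattern (plain `re`) to a detail-page URL taken from the crawl log, plus a non-matching URL for contrast:

```python
import re

# The allow pattern from the spider's Rule.
pattern = re.compile(r'films/\d+.*')

# Detail-page URL seen in the crawl log: matches.
print(bool(pattern.search('http://maoyan.com/films/248683')))  # True
# An unrelated path: no match.
print(bool(pattern.search('http://maoyan.com/board/4')))       # False
```

So the pattern itself was never the problem; the requests it produced were being dropped downstream.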

# encoding: utf-8
import time

from tutorial.items import CrawlerItem
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MoyanSpider(CrawlSpider):
    name = 'maoyan'
    allowed_domains = ['maoyan.com']
    start_urls = ['http://maoyan.com/films']
    rules = (
        Rule(LinkExtractor(allow=(r'films/\d+.*')), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        print(response.url)
        item = CrawlerItem()
        try:
            time.sleep(2)
            item['name'] = response.text.find('p', class_='movie-brief-container').find('h3', class_='name').get_text()
            item['score'] = response.text.find('p', class_='movie-index-content score normal-score').find('span', class_='stonefont').get_text()
            url = 'http://maoyan.com' + response.text.find('p', class_='channel-detail movie-item-title').find('a')['href']
            item['id'] = response.url.split('/')[-1]
            temp = response.text.find('p', 'movie-brief-container').find('ul').get_text()
            temp = temp.split('\n')
            item['tags'] = temp[1]
            item['countries'] = temp[3].strip()
            item['duration'] = temp[4].split('/')[-1]
            item['time'] = temp[6]
            return item
        except Exception as e:
            print(e)

The key point is that allowed_domains must not contain the 'http://' prefix.
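The reason the scheme breaks things: Scrapy's OffsiteMiddleware compares the hostname of each outgoing request against the allowed_domains entries, so an entry like 'http://maoyan.com' can never equal the host 'maoyan.com', and every extracted link is filtered (the `'offsite/filtered': 30` stat in your log). A rough stdlib-only sketch of that check, not Scrapy's actual implementation:

```python
from urllib.parse import urlparse

def is_offsite(url, allowed_domains):
    # A request is offsite when its hostname neither equals nor is a
    # subdomain of any allowed domain (simplified vs. Scrapy's regex).
    host = urlparse(url).hostname or ''
    return not any(host == d or host.endswith('.' + d) for d in allowed_domains)

# With the scheme left in allowed_domains, every request is filtered:
print(is_offsite('http://maoyan.com/films/248683', ['http://maoyan.com']))  # True
# With the bare domain, the detail pages pass through:
print(is_offsite('http://maoyan.com/films/248683', ['maoyan.com']))         # False
```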

Also, your parsing code still has some problems. I haven't fixed those; once you're getting data you should be able to sort them out yourself.
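For the `temp` indexing in particular, it helps to see what the split produces. The sample string below is made up for illustration (the real text of the movie-brief `<ul>` will differ, so the indices may too), but it shows how the fields land at fixed positions after `split('\n')`, and why any layout change on the page breaks the parser:

```python
# Hypothetical get_text() output for the movie-brief <ul>; the real
# page's text, and therefore the indices below, may differ.
raw = '\n剧情,爱情\n\n中国大陆\n/120分钟\n\n2017-05-08上映'
temp = raw.split('\n')

tags = temp[1]                     # '剧情,爱情'
countries = temp[3].strip()        # '中国大陆'
duration = temp[4].split('/')[-1]  # '120分钟'
release = temp[6]                  # '2017-05-08上映'
print(tags, countries, duration, release)
```

Matching on CSS classes and individual `<li>` elements would be far more robust than positional indexing like this.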

Also, a complaint about the earlier answerers: answering confidently without even debugging the code is just misleading people.

Answer 2:

Several of the modules you import have been deprecated; the warnings are telling you to switch to the equivalent replacement modules.

Answer 3:

Those are only warnings, not errors. It may be that the site you are crawling has anti-scraping measures that keep you from fetching pages normally.

Tags: Python Programming