What is the CrawlSpider class?
- It is a subclass of Spider.
- The difference:
  - With a plain Spider, you take each URL you obtain and send the request manually: yield scrapy.Request(url=new_url, callback=self.parse)
  - CrawlSpider uses the extractor class LinkExtractor to extract every URL on a page that matches a rule, then uses the Rule class to send requests to the matched URLs automatically.
- Command to create a CrawlSpider-based spider: scrapy genspider -t crawl xxx (spider name) www.xxxx.com (URL to crawl)
A spider class generated from the CrawlSpider template, code example:
import scrapy
from scrapy.linkextractors import LinkExtractor  # class that extracts URLs
from scrapy.spiders import CrawlSpider, Rule     # Rule sends requests to extracted links automatically

class XuexiSpider(CrawlSpider):
    name = 'xuexi'
    allowed_domains = ['www.xxx.com']
    start_urls = ['http://www.xxx.com/']

    # LinkExtractor(allow=r'Items/'): the allow parameter takes a regular expression,
    # and URLs in the response that match it are extracted. Other matching rules are
    # available as well, e.g. CSS selectors (see the sketch after this code block).
    rules = (
        # follow=True automatically extracts every matching URL from each response and requests it
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        # item['domain_id'] = response.xpath('//input[@id="sid"]/@value').get()
        # item['name'] = response.xpath('//div[@id="name"]').get()
        # item['description'] = response.xpath('//div[@id="description"]').get()
        return item
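The allow regex is only one way to match links. As the comment above mentions, LinkExtractor can also select links by CSS: its restrict_css (and restrict_xpaths) parameters limit extraction to part of the page instead of matching hrefs by regex. A minimal sketch, where the spider name and the '.pagination' selector are hypothetical:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class CssRuleSpider(CrawlSpider):  # hypothetical example spider
    name = 'css_demo'
    start_urls = ['http://www.xxx.com/']

    rules = (
        # Extract only links found inside the assumed ".pagination" element,
        # rather than matching their hrefs with an allow regex
        Rule(LinkExtractor(restrict_css='.pagination'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        yield {'url': response.url}

As with any spider, it would be run with scrapy crawl css_demo.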
Below is another example, using the Sunshine hotline site (陽光信息網) as the crawl target. Code example:
# 1. Spider file (.py) code example:
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from sunPro.items import SunproItem, DetailItem

# Requirement: crawl the post number and title from the list pages of the sun site,
# and the post number and content from the detail pages
class SunSpider(CrawlSpider):
    name = 'sun'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=']

    # Link extractors: extract links according to the given rule (allow="regex")
    link = LinkExtractor(allow=r'type=4&page=\d+')
    link_detail = LinkExtractor(allow=r'question/\d+/\d+\.shtml')

    rules = (
        # Rule: parse the links captured by the link extractor with the given callback
        # follow=True: keep applying the link extractor to the pages that its own
        # extracted links point to, so pagination is followed recursively
        Rule(link, callback='parse_item', follow=True),
        Rule(link_detail, callback='parse_detail'),
    )

    # http://wz.sun0769.com/html/question/201907/421001.shtml
    # http://wz.sun0769.com/html/question/201907/420987.shtml

    # Parse the post number and title.
    # Note: the two parse methods below cannot pass data to each other through
    # request meta (Rule-generated requests do not support that), so their parsed
    # data cannot be stored in one item; two item classes are used instead.
    def parse_item(self, response):
        # Note: XPath expressions must not contain the tbody tag
        tr_list = response.xpath('//*[@id="morelist"]/div/table[2]//tr/td/table//tr')
        for tr in tr_list:
            new_num = tr.xpath('./td[1]/text()').extract_first()
            new_title = tr.xpath('./td[2]/a[2]/@title').extract_first()
            item = SunproItem()
            item['title'] = new_title
            item['new_num'] = new_num
            yield item

    # Parse the post number and content
    def parse_detail(self, response):
        new_id = response.xpath('/html/body/div[9]/table[1]//tr/td[2]/span[2]/text()').extract_first()
        new_content = response.xpath('/html/body/div[9]/table[2]//tr[1]//text()').extract()
        new_content = ''.join(new_content)
        # print(new_id, new_content)
        item = DetailItem()
        item['content'] = new_content
        item['new_id'] = new_id
        yield item
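For contrast, in a plain Spider, where each Request is built by hand, the two steps could share a single item by passing it through Request.meta. A minimal sketch under the assumptions that one item class defines all four fields and that the detail link sits in the same anchor as the title (the spider name and the href XPath are hypothetical; the other XPaths come from the example above):

import scrapy
from sunPro.items import SunproItem  # assuming one item class holds all fields

class SunMetaSpider(scrapy.Spider):  # hypothetical plain-Spider variant
    name = 'sun_meta'
    start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=']

    def parse(self, response):
        for tr in response.xpath('//*[@id="morelist"]/div/table[2]//tr/td/table//tr'):
            item = SunproItem()
            item['new_num'] = tr.xpath('./td[1]/text()').extract_first()
            item['title'] = tr.xpath('./td[2]/a[2]/@title').extract_first()
            detail_url = tr.xpath('./td[2]/a[2]/@href').extract_first()
            # Pass the partially filled item to the detail callback via meta
            yield scrapy.Request(url=response.urljoin(detail_url),
                                 callback=self.parse_detail,
                                 meta={'item': item})

    def parse_detail(self, response):
        item = response.meta['item']  # retrieve the item started on the list page
        item['content'] = ''.join(
            response.xpath('/html/body/div[9]/table[2]//tr[1]//text()').extract())
        yield item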
# 2. items.py code example:
# Because the data comes from different pages and cannot be passed between the
# callbacks, two item classes are used to receive the data parsed from each page
import scrapy

class SunproItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    new_num = scrapy.Field()

class DetailItem(scrapy.Item):
    new_id = scrapy.Field()
    content = scrapy.Field()
# 3. pipelines.py code example:
# Use the item's class name to determine which item the data came from
class SunproPipeline(object):
    def process_item(self, item, spider):
        # Determine the item's type, so that each record is routed correctly
        # (and stays consistent) when it is written to the database
        if item.__class__.__name__ == 'DetailItem':
            print(item['new_id'], item['content'])
        else:
            print(item['new_num'], item['title'])
        return item
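A small design note: comparing item.__class__.__name__ to a string works, but isinstance is the more idiomatic way to dispatch on item type, and it keeps working if a class is imported under another name. A minimal sketch of the same pipeline rewritten that way:

from sunPro.items import SunproItem, DetailItem

class SunproPipeline(object):
    def process_item(self, item, spider):
        # isinstance dispatch instead of comparing class names as strings
        if isinstance(item, DetailItem):
            print(item['new_id'], item['content'])
        elif isinstance(item, SunproItem):
            print(item['new_num'], item['title'])
        return item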