The knowledge points covered are, respectively:

1. Scraping the basic data on a single page.
2. Using the scraped data to drive a second-level crawl.
3. Looping over pages so the spider collects all of the data.

Enough talk; let's get to work.
Analyzing the page source of the news column, we find that each news entry sits inside a div with the classes newsinfo_box cf. So we only need to point the spider's selector at div.newsinfo_box.cf and scrape inside a for loop.
import scrapy

class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        # Each news entry sits in a div with class "newsinfo_box cf"
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            # Build the absolute URL of the news detail page
            url = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
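To test, run the spider from the project root (the project is assumed to be named ggglxy, matching the import used later):

scrapy crawl news_info_2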
Test: passed!
Now I have a set of URLs. Next I need to enter each URL and scrape the title, date, and content I'm after. The implementation is simple too: whenever the existing code grabs a URL, follow it and scrape the corresponding data. So all I need is one more method that scrapes the news detail page, invoked via scrapy.Request.
# Method that scrapes the news detail page
def parse_dir_contents(self, response):
    item = GgglxyItem()
    item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
    item['href'] = response.url  # store the page URL, not the Response object
    item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
    data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
    item['content'] = data[0].xpath('string(.)').extract()[0]  # string(.) joins all text inside the div
    yield item
整合進原有代碼后,有:
import scrapy
from ggglxy.items import GgglxyItem

class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            url = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            # Invoke the detail-page scraping method
            yield scrapy.Request(url, callback=self.parse_dir_contents)

    # Method that scrapes the news detail page
    def parse_dir_contents(self, response):
        item = GgglxyItem()
        item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
        item['href'] = response.url  # store the page URL, not the Response object
        item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
        data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
        item['content'] = data[0].xpath('string(.)').extract()[0]
        yield item
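The code relies on GgglxyItem but its definition is never shown; a minimal items.py matching the fields used above might look like this (a sketch, assuming the Scrapy project is named ggglxy):

import scrapy

class GgglxyItem(scrapy.Item):
    date = scrapy.Field()     # publication date from the detail page
    href = scrapy.Field()     # URL of the detail page
    title = scrapy.Field()    # headline
    content = scrapy.Field()  # full article text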
Test: passed!
Now we add a loop over the listing pages:
NEXT_PAGE_NUM = 1  # module-level page counter

# ...at the end of parse():
global NEXT_PAGE_NUM
NEXT_PAGE_NUM = NEXT_PAGE_NUM + 1
if NEXT_PAGE_NUM < 11:
    next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % NEXT_PAGE_NUM
    yield scrapy.Request(next_url, callback=self.parse)
Folded into the original code:
import scrapy
from ggglxy.items import GgglxyItem

NEXT_PAGE_NUM = 1

class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            URL = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            yield scrapy.Request(URL, callback=self.parse_dir_contents)

        # The counter lives at module level so it survives between parse() calls
        global NEXT_PAGE_NUM
        NEXT_PAGE_NUM = NEXT_PAGE_NUM + 1
        if NEXT_PAGE_NUM < 11:
            next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % NEXT_PAGE_NUM
            yield scrapy.Request(next_url, callback=self.parse)

    def parse_dir_contents(self, response):
        item = GgglxyItem()
        item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
        item['href'] = response.url  # store the page URL, not the Response object
        item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
        data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
        item['content'] = data[0].xpath('string(.)').extract()[0]
        yield item
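As an aside, the global counter does the job but relies on mutable module state. A stateless sketch (not from the original code) would instead read the current page number out of the response URL:

import re

def parse(self, response):
    # ...yield the detail-page requests exactly as above...
    # Derive the current page number from the URL; default to 1 if absent
    match = re.search(r'page=(\d+)', response.url)
    page = int(match.group(1)) if match else 1
    if page < 10:  # pages 1 through 10, same range as the counter version
        next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%d' % (page + 1)
        yield scrapy.Request(next_url, callback=self.parse)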
Test:
The scrape picks up 191 items, but the official site shows 193 news articles; two are missing.
Why? We notice that the log contains two errors.
Locating the problem: it turns out the school's news column also contains two hidden second-level columns. Their URLs look different from the regular article links (they carry a type parameter rather than pointing straight to an article), so no wonder the spider never picked them up!
So we need a dedicated rule for these two second-level-column URLs: simply check whether a URL belongs to a second-level column, and if it does, feed it back into parse:
if URL.find('type') != -1:
    yield scrapy.Request(URL, callback=self.parse)
Assembling the full spider:
import scrapy
from ggglxy.items import GgglxyItem

NEXT_PAGE_NUM = 1

class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            URL = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            if URL.find('type') != -1:
                # Second-level column: treat it as another listing page
                yield scrapy.Request(URL, callback=self.parse)
            else:
                # Regular article: go straight to the detail page
                yield scrapy.Request(URL, callback=self.parse_dir_contents)

        global NEXT_PAGE_NUM
        NEXT_PAGE_NUM = NEXT_PAGE_NUM + 1
        if NEXT_PAGE_NUM < 11:
            next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % NEXT_PAGE_NUM
            yield scrapy.Request(next_url, callback=self.parse)

    def parse_dir_contents(self, response):
        item = GgglxyItem()
        item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
        item['href'] = response.url
        item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
        data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
        item['content'] = data[0].xpath('string(.)').extract()[0]
        yield item
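One detail worth calling out: the else branch keeps second-level-column URLs out of parse_dir_contents. Those URLs point at listing pages rather than articles, so they presumably contain no detail_zy_c div, and data[0] would raise an IndexError if we tried to parse them as detail pages.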
Test:
This time the scrape picks up 238 items instead of 191 (even more than the 193 shown on the site, presumably because the hidden second-level columns carry extra articles), and the log contains no errors, so our crawl rules are working. Finally, export the results to JSON:
scrapy crawl news_info_2 -o 0016.json
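Note that in Scrapy releases of this era the -o flag appends to an existing file rather than overwriting it, which can corrupt JSON output, so delete 0016.json before re-running the spider if you want a clean export.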