The knowledge points covered are, in order:
1. Scraping the basic data on a single page.
2. Performing a second round of scraping based on the data already scraped.
3. Looping over pages to scrape all of the section's data.
Without further ado, let's get to work.
By inspecting the source code of the news column, we can see how the data we want is structured.
So we only need to point the spider's selector at the div with class newsinfo_box cf, then scrape each match in a for loop.
import scrapy


class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        # Each news entry sits in a div with class "newsinfo_box cf";
        # pull the article link out of each one
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            url = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
Test it — it passes!
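A quick way to check the selector before wiring up the spider (my own sketch; the post doesn't show how the test was run) is to poke at the page in the Scrapy shell:

scrapy shell "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1"

>>> # Print every article URL the selector finds on page 1
>>> for box in response.xpath("//div[@class='newsinfo_box cf']"):
...     print(response.urljoin(box.xpath("div[@class='news_c fr']/h3/a/@href").extract_first()))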
Now I have a set of URLs. Next I need to enter each one and scrape the title, date, and content I'm after. The implementation is straightforward: whenever the original code finds a URL, follow it and scrape the corresponding data. So all I need is a method that scrapes the news detail page, invoked via scrapy.Request.
# Method that scrapes a news detail page
def parse_dir_contents(self, response):
    item = GgglxyItem()
    item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
    item['href'] = response.url
    item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
    data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
    # string(.) flattens the whole content div into plain text
    item['content'] = data[0].xpath('string(.)').extract()[0]
    yield item
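For completeness: the code imports GgglxyItem from ggglxy.items but the post never shows that file. Based on the fields assigned above, items.py would look roughly like this (a sketch, not the author's actual code):

import scrapy

class GgglxyItem(scrapy.Item):
    # One Field per piece of data the spider collects
    date = scrapy.Field()
    href = scrapy.Field()
    title = scrapy.Field()
    content = scrapy.Field()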
Integrated into the original code, we have:
import scrapy
from ggglxy.items import GgglxyItem


class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            url = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            # Follow each article URL with the detail-page callback
            yield scrapy.Request(url, callback=self.parse_dir_contents)

    # Method that scrapes a news detail page
    def parse_dir_contents(self, response):
        item = GgglxyItem()
        item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
        item['href'] = response.url
        item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
        data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
        item['content'] = data[0].xpath('string(.)').extract()[0]
        yield item
Test it — it passes!
Next, we add a loop over the pages:
NEXT_PAGE_NUM = 1

NEXT_PAGE_NUM = NEXT_PAGE_NUM + 1
if NEXT_PAGE_NUM < 11:
    next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % NEXT_PAGE_NUM
    yield scrapy.Request(next_url, callback=self.parse)
Merged into the existing code:
import scrapy
from ggglxy.items import GgglxyItem

NEXT_PAGE_NUM = 1


class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            URL = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            yield scrapy.Request(URL, callback=self.parse_dir_contents)
        # Advance the page counter and queue the next listing page
        global NEXT_PAGE_NUM
        NEXT_PAGE_NUM = NEXT_PAGE_NUM + 1
        if NEXT_PAGE_NUM < 11:
            next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % NEXT_PAGE_NUM
            yield scrapy.Request(next_url, callback=self.parse)

    def parse_dir_contents(self, response):
        item = GgglxyItem()
        item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
        item['href'] = response.url
        item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
        data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
        item['content'] = data[0].xpath('string(.)').extract()[0]
        yield item
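A side note on that counter: a module-level global works for a small serial crawl like this one, but it is fragile shared state. A more self-contained pattern (my own sketch, not the author's code) reads the current page number out of response.url instead:

import re  # added at the top of the file

def parse(self, response):
    # ... yield the article requests as above, then:
    page = int(re.search(r'page=(\d+)', response.url).group(1))
    if page < 10:  # covers pages 1 through 10, same range as the counter
        next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%d' % (page + 1)
        yield scrapy.Request(next_url, callback=self.parse)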
Test:
This time we scrape 191 items, but the official site shows 193 news articles — two are missing.
Why? We notice two errors in the log:
Tracking down the problem: it turns out the school's news column also contains two hidden second-level columns:
For example:
whose corresponding URL is
The URLs follow a different pattern — no wonder the spider missed them!
So we need a dedicated rule for these two second-level column URLs; we just add a check for whether a URL points to a second-level column:
if URL.find('type') != -1:
    yield scrapy.Request(URL, callback=self.parse)
Assembled back into the full spider:
import scrapy
from ggglxy.items import GgglxyItem

NEXT_PAGE_NUM = 1


class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            URL = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            # Second-level column URLs contain 'type'; feed them back
            # into parse so their article lists get scraped too
            if URL.find('type') != -1:
                yield scrapy.Request(URL, callback=self.parse)
            yield scrapy.Request(URL, callback=self.parse_dir_contents)
        global NEXT_PAGE_NUM
        NEXT_PAGE_NUM = NEXT_PAGE_NUM + 1
        if NEXT_PAGE_NUM < 11:
            next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % NEXT_PAGE_NUM
            yield scrapy.Request(next_url, callback=self.parse)

    def parse_dir_contents(self, response):
        item = GgglxyItem()
        item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
        item['href'] = response.url
        item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
        data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
        item['content'] = data[0].xpath('string(.)').extract()[0]
        yield item
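One detail worth knowing: as assembled above, a second-level column URL is yielded twice — once for parse and once for parse_dir_contents — and it is Scrapy's built-in duplicate-request filter that silently drops the second request. If you'd rather not depend on that, an explicit if/else does the same job (my own adjustment, not the original code):

if URL.find('type') != -1:
    # Column page: feed it back through parse
    yield scrapy.Request(URL, callback=self.parse)
else:
    # Ordinary article: scrape the detail page
    yield scrapy.Request(URL, callback=self.parse_dir_contents)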
Test:
The number of items scraped has risen from the earlier 191 to 238 — more than the 193 listed on the main column, since the hidden second-level columns contribute their own articles — and the log shows no more errors, so our scraping rules work!
Finally, run the spider and export everything to JSON:
scrapy crawl news_info_2 -o 0016.json
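(One caveat: Scrapy's -o flag appends to an existing output file, so delete 0016.json before re-running the crawl, or the resulting JSON will be malformed.)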