
Crawling the ICP filing information of a specified URL with Python

Source: 懂视网 | Editor: 小采 | Published: 2020-11-27 14:28:26
This post shares a short Python 2 script that looks up the ICP filing record of a URL through icp.alexa.cn and writes the parsed fields (company name, company type, ICP licence number, site name, home page, and detail-page link) to a tab-separated file named out.txt. URLs are read one per line from standard input.

#coding=gbk
import os
import sys
import re
import time
import urllib2

def perror_and_exit(message, status = -1):
    sys.stderr.write(message + '\n')
    sys.exit(status)

def get_text_from_html_tag(html):
    # Pull the text between ">" and "</" out of a single HTML tag.
    pattern_text = re.compile(r">.*?</")
    return pattern_text.findall(html)[0][1:-2].strip()

def parse_alexa(url):
    # Query icp.alexa.cn for the ICP record of the given URL.
    url_alexa = "http://icp.alexa.cn/index.php?q=%s" % url
    print url_alexa
    # Retry with exponential backoff until the result table shows up.
    times = 0
    while times < 5000:  # cap the number of retries
        try:
            alexa = urllib2.urlopen(url_alexa).read()

            pattern_table = re.compile(r"<table.*?</table>", re.DOTALL | re.MULTILINE)
            match_table = pattern_table.search(alexa)
            if not match_table:
                raise BaseException("No table in HTML")
            break
        except:
            print "try %s times:sleep %s seconds" % (times, 2**times)
            times += 1
            time.sleep(2**times)
            continue

    table = match_table.group()
    pattern_tr = re.compile(r"<tr.*?</tr>", re.DOTALL | re.MULTILINE)
    match_tr = pattern_tr.findall(table)
    if len(match_tr) != 2:
        perror_and_exit("table format is incorrect")

    # The second row holds the ICP record; split it into cells.
    icp_tr = match_tr[1]
    pattern_td = re.compile(r"<td.*?</td>", re.DOTALL | re.MULTILINE)
    match_td = pattern_td.findall(icp_tr)

    #print match_td
    company_name = get_text_from_html_tag(match_td[1])
    company_properties = get_text_from_html_tag(match_td[2])
    company_icp = get_text_from_html_tag(match_td[3])
    company_icp = company_icp[company_icp.find(">") + 1:]
    company_website_name = get_text_from_html_tag(match_td[4])
    company_website_home_page = get_text_from_html_tag(match_td[5])
    company_website_home_page = company_website_home_page[company_website_home_page.rfind(">") + 1:]
    company_detail_url = get_text_from_html_tag(match_td[7])
    pattern_href = re.compile(r"href=\".*?\"", re.DOTALL | re.MULTILINE)
    match_href = pattern_href.findall(company_detail_url)
    if len(match_href) == 0:
        company_detail_url = ""
    else:
        company_detail_url = match_href[0][len("href=\""):-1]
    return [url, company_name, company_properties, company_icp, company_website_name, company_website_home_page, company_detail_url]

if __name__ == "__main__":
    fw = file("out.txt", "w")
    for url in sys.stdin:
        url = url.strip()  # drop the trailing newline read from stdin
        fw.write("\t".join(parse_alexa(url)) + "\n")
        time.sleep(2)  # wait between lookups so the source is less likely to block the IP
 
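The script above is Python 2 throughout (urllib2, print statements, file()). For anyone on Python 3, the snippet below is a minimal sketch of the same fetch-with-exponential-backoff step using the standard library's urllib.request; the function name fetch_icp_page, the retry cap, and the assumption that the page is GBK-encoded are mine, not part of the original script.

# Hypothetical Python 3 port of the fetch-with-retry step (a sketch, not the original script).
import time
import urllib.request

def fetch_icp_page(url, max_retries=5):
    # Build the lookup URL exactly as the original script does.
    url_alexa = "http://icp.alexa.cn/index.php?q=%s" % url
    for attempt in range(max_retries):
        try:
            with urllib.request.urlopen(url_alexa, timeout=10) as resp:
                # Page encoding is assumed to be GBK here; adjust if the site differs.
                return resp.read().decode("gbk", errors="replace")
        except Exception as exc:
            delay = 2 ** attempt
            print("attempt %d failed (%s); sleeping %d seconds" % (attempt, exc, delay))
            time.sleep(delay)
    raise RuntimeError("giving up after %d attempts" % max_retries)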

Every lookup sleeps for 2 seconds to keep the IP from being banned; in practice, even with the sleep, the IP still gets banned after a while.
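A fixed, predictable delay is also easy for rate limiters to spot. One common mitigation, sketched below purely as an illustration (the base and jitter values are assumptions), is to randomize the pause between lookups:

# Illustrative sketch only: randomize the politeness delay between lookups.
import random
import time

def polite_sleep(base=2.0, jitter=3.0):
    # Sleep for the base delay plus up to `jitter` extra seconds, chosen at random.
    time.sleep(base + random.uniform(0.0, jitter))

Even then, a site that tracks request volume per IP will eventually block a single-address crawler, so randomized delays only postpone the problem.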

Because this is structure-dependent scraping, the program will stop working whenever the site changes its page layout; a more tolerant parsing approach is sketched below.
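The fragile part is the regex-based tag matching. As a rough sketch of a slightly more tolerant alternative, the Python 3 snippet below uses the standard library's html.parser to collect the text of every <td> cell regardless of tag attributes; the class name TdCollector is hypothetical, and the cell order would still have to be mapped to the ICP fields by hand.

# Hypothetical Python 3 sketch: collect <td> text with the standard html.parser
# instead of matching raw tag strings with regexes.
from html.parser import HTMLParser

class TdCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []  # text of each <td>, in document order

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True
            self.cells.append("")

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False

    def handle_data(self, data):
        if self.in_td:
            self.cells[-1] += data.strip()

# Usage sketch:
#   parser = TdCollector()
#   parser.feed(html_text)
#   print(parser.cells)

This still breaks if the site drops the table entirely, but it survives cosmetic changes such as added attributes or extra whitespace.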
