
A Guide to the Basic Modules and Framework for Writing Web Crawlers in Python

Updated: 2016-01-20 14:58:06   Author: 明天以后
This article introduces the basic modules and a framework for writing web crawlers in Python. The module part covers usage examples of urllib, urllib2, and re; the framework part is a brief introduction to Scrapy.

Basic modules
A Python crawler (web spider) fetches pages from websites, then parses them and extracts data.

The basic approach uses modules such as urllib, urllib2, and re.

Basic usage, with examples:

(1) Making a basic GET request to fetch a page's HTML

#!coding=utf-8
import urllib
import urllib2
 
url = 'http://www.baidu.com/'
# Build the request
request = urllib2.Request(url)
try:
  # Send the request and get the response back
  response = urllib2.urlopen(request)
except urllib2.URLError, e:
  # URLError also covers HTTPError; print the failure reason if there is one
  if hasattr(e, 'reason'):
    print e.reason
else:
  # Read the response body
  html = response.read()
  # Read the response headers
  headers = response.info()

   
(2) Submitting a form (POST request)

#!coding=utf-8
import urllib2
import urllib
 
# URL of the form to post to (fill in before running)
post_url = ''
 
# Form fields, URL-encoded for the POST body
post_data = urllib.urlencode({
  'username': 'username',
  'password': 'password',
})
 
post_headers = {
  'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:31.0) Gecko/20100101 Firefox/31.0',
}
 
request = urllib2.Request(
  url=post_url,
  data=post_data,
  headers=post_headers,
)
 
response = urllib2.urlopen(request)
 
html = response.read()

(3) Scraping the posts of a Baidu Tieba thread with a regular expression and appending them to a text file

#!coding=utf-8
 
import re
import sys
import urllib2
 
# Make implicit str conversions use UTF-8 (a common Python 2 workaround)
reload(sys)
sys.setdefaultencoding('utf-8')
 
page_num = 1
url = 'http://tieba.baidu.com/p/3238280985?see_lz=1&pn=' + str(page_num)
myPage = urllib2.urlopen(url).read().decode('gbk')
 
# Extract the body of every post in the thread
myRe = re.compile(r'class="d_post_content j_d_post_content ">(.*?)</div>', re.DOTALL)
items = myRe.findall(myPage)
 
f = open('baidu.txt', 'a+')
 
i = 0
for item in items:
  i += 1
  print i
  # Strip <br> tags, newlines and spaces, then terminate with a newline
  text = item.replace('<br>', '')
  text = text.replace('\n', '').replace(' ', '') + '\n'
  print text
  f.write(text)
 
f.close()

(4) Logging in to a 163 (NetEase) mailbox and downloading mail contents

#coding:utf-8
'''
  Log in to a 163 mailbox and download mail contents
 
'''
import urllib
import urllib2
import cookielib
import re
import time
import json
 
class Email163:
  header = {'User-Agent':'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
  user = ''
  cookie = None
  sid = None
  mailBaseUrl='http://twebmail.mail.163.com'
 
  def __init__(self):
    # Install a global opener that keeps cookies across requests
    self.cookie = cookielib.CookieJar()
    cookiePro = urllib2.HTTPCookieProcessor(self.cookie)
    urllib2.install_opener(urllib2.build_opener(cookiePro))
 
  def login(self,user,pwd):
    '''
      Log in
    '''
    postdata = urllib.urlencode({
        'username':user,
        'password':pwd,
        'type':1
      })
    # Note: the login URL differs between webmail versions
    req = urllib2.Request(
        url='https://ssl.mail.163.com/entry/coremail/fcg/ntesdoor2?funcid=loginone&language=-1&passtype=1&iframe=1&product=mail163&from=web&df=email163&race=-2_45_-2_hz&module=&uid='+user+'&style=10&net=t&skinid=null',
        data=postdata,
        headers=self.header,
      )
    res = str(urllib2.urlopen(req).read())
    #print res
    patt = re.compile('sid=([^"]+)',re.I)
    patt = patt.search(res)
 
    uname = user.split('@')[0]
    self.user = user
    if patt:
      self.sid = patt.group(1).strip()
      #print self.sid
      print '%s Login Successful.....'%(uname)
    else:
      print '%s Login failed....'%(uname)
 
 
  def getInBox(self):
    '''
      Fetch the inbox mail list
    '''
    print '\nGet mail lists.....\n'
    sid = self.sid
    url = self.mailBaseUrl+'/jy3/list/list.do?sid='+sid+'&fid=1&fr=folder'
    res = urllib2.urlopen(url).read()
    # Extract the mail list (URL, sender and subject of each mail)
    mailList = []
    patt = re.compile('<div\s+class="tdLike Ibx_Td_From"[^>]+>.*?href="([^"]+)"[^>]+>(.*?)<\/a>.*?<div\s+class="tdLike Ibx_Td_Subject"[^>]+>.*?href="[^>]+>(.*?)<\/a>',re.I|re.S)
    patt = patt.findall(res)
    if not patt:
      return mailList
 
    for i in patt:
      line = {
          'from':i[1].decode('utf8'),
           'url':self.mailBaseUrl+i[0],
           'subject':i[2].decode('utf8')
           }
      mailList.append(line)
 
    return mailList
 
 
  def getMailMsg(self,url):
    '''
      Download the content of a single mail
    '''
    content=''
    print '\n Download.....%s\n'%(url)
    res = urllib2.urlopen(url).read()
 
    patt = re.compile('contentURL:"([^"]+)"',re.I)
    patt = patt.search(res)
    if patt is None:
      return content
    url = '%s%s'%(self.mailBaseUrl,patt.group(1))
    time.sleep(1)
    res = urllib2.urlopen(url).read()
    Djson = json.JSONDecoder(encoding='utf8')
    jsonRes = Djson.decode(res)
    if 'resultVar' in jsonRes:
      content = jsonRes['resultVar']
    time.sleep(3)
    return content
 
 
'''
  Demo
'''
# Initialize
mail163 = Email163()
# Log in
mail163.login('lpe234@163.com','944898186')
time.sleep(2)
 
# Fetch the inbox list
elist = mail163.getInBox()
 
# Download each mail's content
for i in elist:
  print 'Subject: %s  From: %s  Content:\n%s'%(i['subject'].encode('utf8'),i['from'].encode('utf8'),mail163.getMailMsg(i['url']).encode('utf8'))

(5) Cases that require logging in

#1 Handling cookies
 
import urllib2, cookielib
cookie_support= urllib2.HTTPCookieProcessor(cookielib.CookieJar())
opener = urllib2.build_opener(cookie_support, urllib2.HTTPHandler)
urllib2.install_opener(opener)
content = urllib2.urlopen('http://XXXX').read()
 
#2 Using a proxy together with cookies
 
proxy_support = urllib2.ProxyHandler({'http': 'http://XX.XX.XX.XX:XXXX'})
opener = urllib2.build_opener(proxy_support, cookie_support, urllib2.HTTPHandler)
 
#3 Handling forms
 
import urllib
postdata=urllib.urlencode({
  'username':'XXXXX',
  'password':'XXXXX',
  'continueURI':'http://www.verycd.com/',
  'fk':fk,  # fk is a token that first has to be scraped from the login page
  'login_submit':'登錄'
})
 
req = urllib2.Request(
  url = 'http://secure.verycd.com/signin/*/http://www.verycd.com/',
  data = postdata
)
result = urllib2.urlopen(req).read()
 
#4 Disguising the request as a browser
 
headers = {
  'User-Agent':'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'
}
req = urllib2.Request(
  url = 'http://secure.verycd.com/signin/*/http://www.verycd.com/',
  data = postdata,
  headers = headers
)
 
#5 Defeating "anti-hotlinking" (spoof the Referer header)
 
headers = {
  'Referer':'http://www.cnbeta.com/articles'
}
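
Putting the pieces above together, the sketch below combines the cookie handler, a proxy, browser-like headers, and a form POST into a single login request. It is only an illustration in the spirit of the snippets above: the proxy address, the http://XXXX/login URL, and the form fields are placeholders, not a real site's API.

# Combined sketch: cookies + proxy + browser headers + form POST (all values are placeholders)
import urllib
import urllib2
import cookielib
 
cookie_support = urllib2.HTTPCookieProcessor(cookielib.CookieJar())
proxy_support = urllib2.ProxyHandler({'http': 'http://XX.XX.XX.XX:XXXX'})
opener = urllib2.build_opener(proxy_support, cookie_support, urllib2.HTTPHandler)
urllib2.install_opener(opener)
 
postdata = urllib.urlencode({
  'username': 'XXXXX',
  'password': 'XXXXX',
})
headers = {
  'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6',
  'Referer': 'http://XXXX/login',
}
req = urllib2.Request(url='http://XXXX/login', data=postdata, headers=headers)
# The cookies set by this response are reused automatically by later urlopen() calls
result = urllib2.urlopen(req).read()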

(6) Multi-threading

from threading import Thread
from Queue import Queue
from time import sleep
# q is the task queue
# NUM is the total number of concurrent worker threads
# JOBS is the number of tasks
q = Queue()
NUM = 2
JOBS = 10
# The handler function, responsible for processing a single task
def do_something_using(arguments):
  print arguments
# The worker: keeps pulling tasks from the queue and processing them
def working():
  while True:
    arguments = q.get()
    do_something_using(arguments)
    sleep(1)
    q.task_done()
# Start NUM daemon threads waiting on the queue
for i in range(NUM):
  t = Thread(target=working)
  t.setDaemon(True)
  t.start()
# Put the JOBS tasks into the queue
for i in range(JOBS):
  q.put(i)
# Wait until all JOBS are done
q.join()

The Scrapy framework
  Scrapy is a fast, high-level screen-scraping and web-crawling framework written in Python, used to crawl websites and extract structured data from their pages. It has a wide range of uses, including data mining, monitoring, and automated testing.

    I have only just started learning this framework, so I won't judge it too much. My impression is that it feels a bit like Java: it relies on quite a lot of support from other modules.

(I) Creating a Scrapy project

# Run: scrapy startproject scrapy_test
├── scrapy_test
│  ├── scrapy.cfg
│  └── scrapy_test
│    ├── __init__.py
│    ├── items.py
│    ├── pipelines.py
│    ├── settings.py
│    └── spiders
│      ├── __init__.py
# which creates the Scrapy project layout shown above

(II) Explanation of the project layout

scrapy.cfg: the project configuration file
items.py: defines the data structures (items) to be extracted
pipelines.py: pipeline definitions, used to post-process the items extracted by the spider, e.g. to save them (a sketch follows this list)
settings.py: the crawler settings file
spiders: the directory that holds the spiders
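
As a rough illustration of what a pipeline can look like (this sketch is not from the original article), the class below writes every scraped item to a JSON-lines file; the class name JsonWriterPipeline and the items.jl filename are arbitrary choices, and the pipeline only takes effect once it is registered under ITEM_PIPELINES in settings.py.

# pipelines.py -- a minimal, hypothetical item pipeline sketch
import json
 
class JsonWriterPipeline(object):
  def open_spider(self, spider):
    # Called once when the spider starts
    self.file = open('items.jl', 'w')
 
  def close_spider(self, spider):
    # Called once when the spider finishes
    self.file.close()
 
  def process_item(self, item, spider):
    # Called for every item the spider yields; write one JSON object per line
    self.file.write(json.dumps(dict(item)) + '\n')
    return item
 
# Enable it in settings.py, for example:
# ITEM_PIPELINES = {'scrapy_test.pipelines.JsonWriterPipeline': 300}
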
(III) Dependencies

    The dependencies are a bit of a hassle to install.

# Install the python-dev package
apt-get install python-dev
 
# twisted, w3lib, six, queuelib, cssselect, libxslt
 
pip install w3lib
pip install twisted
pip install lxml
apt-get install libxml2-dev libxslt-dev 
apt-get install python-lxml
pip install cssselect 
pip install pyOpenSSL 
sudo pip install service_identity
 
# Once these are installed, a project can be created with: scrapy startproject test

(IV) A scraping example
(1) Create the Scrapy project

dizzy@dizzy-pc:~/Python/spit$ scrapy startproject itzhaopin
New Scrapy project 'itzhaopin' created in:
  /home/dizzy/Python/spit/itzhaopin
 
You can start your first spider with:
  cd itzhaopin
  scrapy genspider example example.com
dizzy@dizzy-pc:~/Python/spit$ 
 
dizzy@dizzy-pc:~/Python/spit$ cd itzhaopin
dizzy@dizzy-pc:~/Python/spit/itzhaopin$ tree
.
├── itzhaopin
│  ├── __init__.py
│  ├── items.py
│  ├── pipelines.py
│  ├── settings.py
│  └── spiders
│    └── __init__.py
└── scrapy.cfg
 
# scrapy.cfg: the project configuration file
# items.py: defines the data structures (items) to be extracted
# pipelines.py: pipeline definitions, used to post-process the extracted items, e.g. to save them
# settings.py: the crawler settings file
# spiders: the directory that holds the spiders

(2) Define the data structures to extract in items.py

from scrapy.item import Item, Field
# Define the data we want to scrape
class TencentItem(Item):
  name = Field() # job title
  catalog = Field() # job category
  workLocation = Field() # work location
  recruitNumber = Field() # number of openings
  detailLink = Field() # link to the job detail page
  publishTime = Field() # publish date

(3) Implement the Spider class

  •  A Spider is a Python class that inherits from scrapy.contrib.spiders.CrawlSpider (the example below uses BaseSpider) and has three members that must be defined:
  •  name: the spider's identifier.
  •  start_urls: a list of URLs from which the spider starts crawling.
  •  parse(): a method called to parse the page content once the pages in start_urls have been downloaded; it must return either the next pages to crawl or a list of items.

        Create a new spider, tencent_spider.py, under the spiders directory:

#coding=utf-8
 
from scrapy.spider import BaseSpider
 
 
class DmozSpider(BaseSpider):
  name = 'dmoz'
  allowed_domains = ['dmoz.org']
  start_urls = [
    'http://www.dmoz.org/Computers/Programming/Languages/Python/Books/',
    'http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/'
  ]
 
  def parse(self, response):
    filename = response.url.split('/')[-2]
    open(filename, 'wb').write(response.body)

 This one is simpler. Run the spider with: scrapy crawl dmoz
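
To connect this with the TencentItem defined in items.py above, here is a rough sketch (not from the original article) of a spider whose parse() yields items. The TencentSpider class name, the hr.tencent.com listing URL, and the XPath expressions are illustrative assumptions and would have to be adapted to the real page markup.

# tencent_spider.py -- a hypothetical sketch of a parse() that yields TencentItem objects
from scrapy.spider import BaseSpider
from scrapy.selector import Selector
 
from itzhaopin.items import TencentItem
 
 
class TencentSpider(BaseSpider):
  name = 'tencent'
  allowed_domains = ['tencent.com']
  start_urls = ['http://hr.tencent.com/position.php']
 
  def parse(self, response):
    sel = Selector(response)
    # Assumed structure: one <tr> per job posting in the listing table (skip the header row)
    for row in sel.xpath('//table[@class="tablelist"]/tr[position()>1]'):
      item = TencentItem()
      item['name'] = row.xpath('./td[1]/a/text()').extract()
      item['detailLink'] = row.xpath('./td[1]/a/@href').extract()
      item['catalog'] = row.xpath('./td[2]/text()').extract()
      item['recruitNumber'] = row.xpath('./td[3]/text()').extract()
      item['workLocation'] = row.xpath('./td[4]/text()').extract()
      item['publishTime'] = row.xpath('./td[5]/text()').extract()
      yield item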
