How to scrape Douban movie review data with Python
Scraping Douban review ratings
Regular scraping
Analyzing the request URL
The only useful query parameters are start and limit. I tried changing the limit parameter, but it had no effect, so it can be treated as fixed at its default value.
The start parameter sets which record the query begins from.
- Examining the pagination links, I found that the page contains the query part of the URL for the next page.
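The start/limit pagination described above can be sketched by building the comments URL programmatically (using the same subject ID as the code below; the page-size value is an assumption based on the default of 20 per page):

```python
from urllib.parse import urlencode

BASE = 'https://movie.douban.com/subject/26322642/comments'

def page_url(page_index, page_size=20):
    # start advances by page_size per page; limit is left at the server default
    params = {'start': page_index * page_size, 'limit': page_size, 'status': 'P'}
    return BASE + '?' + urlencode(params)

print(page_url(0))  # first page, start=0
print(page_url(1))  # second page, start=20
```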

So the following code is used to decide whether there is another page to visit:
if next_url:
    visit_URL('https://movie.douban.com/subject/26322642/comments' + next_url)
- Send requests with requests and parse the pages with BeautifulSoup

Write the data to a txt file:
import requests
from bs4 import BeautifulSoup

base_url = 'https://movie.douban.com/subject/26322642/comments'
first_url = base_url + '?status=P'
# Request headers
headers = {
    'Host': 'movie.douban.com',
    'Referer': 'https://movie.douban.com/subject/24753477/?tag=%E7%83%AD%E9%97%A8&from=gaia_video',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
}

def visit_URL(url):
    res = requests.get(url=url, headers=headers)
    soup = BeautifulSoup(res.content, 'html5lib')
    div_comment = soup.find_all('div', class_='comment-item')  # every comment block on the page
    for com in div_comment:
        username = com.find('div', class_='avatar').a['title']
        comment_time = com.find('span', class_='comment-time')['title']
        votes = com.find('span', class_='votes').get_text()
        comment = com.p.get_text()
        with open('1.txt', 'a', encoding='utf8') as file:
            file.write('Reviewer: ' + username + '\n')
            file.write('Review time: ' + comment_time + '\n')
            file.write('Upvotes: ' + votes + '\n')
            file.write('Review: ' + comment + '\n')
    # Check whether there is a next page
    next_link = soup.find('a', class_='next')
    if next_link:
        next_url = next_link['href'].strip()  # relative query string, e.g. '?start=20&limit=20&status=P'
        print(next_url)
        visit_URL(base_url + next_url)

if __name__ == '__main__':
    visit_URL(first_url)
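Since the href of the "next" link is a relative query string, concatenating it onto the base URL by hand is fragile; urllib.parse.urljoin handles this correctly. A minimal sketch, with the href value hard-coded as an example rather than fetched live:

```python
from urllib.parse import urljoin

base = 'https://movie.douban.com/subject/26322642/comments'
# Example href as it would come from soup.find('a', class_='next')['href']
href = '?start=20&limit=20&status=P&sort=new_score'

next_page = urljoin(base, href)
print(next_page)
```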
Emulating a mobile client
The page served to mobile clients is often simpler than the PC version and easier to parse. While emulating a mobile client this time, I found that the API can be called directly and returns data as JSON, nice!

To emulate a mobile client, all you need to do is change the User-Agent to a mobile one.
How do you get these headers? Use Firefox's User-Agent Switcher extension.
After that it is just a matter of parsing the JSON.
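Fields such as user or rating can be null in the returned JSON, so a defensive lookup with dict.get avoids deep nesting. A sketch over a made-up sample record (the field names match the API response used below; the sample values are invented):

```python
def extract(item):
    # Fall back to 'none' whenever a field is missing or null
    user = item.get('user') or {}
    rating = item.get('rating') or {}
    return {
        'name': user.get('name') or 'none',
        'time': item.get('create_time') or 'none',
        'comment': item.get('comment') or 'none',
        'rating': rating.get('value') or 'none',
    }

sample = {'user': {'name': 'abc'}, 'create_time': '2018-01-01',
          'comment': 'good', 'rating': None}
print(extract(sample))
```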
import random
import requests
import time

url = 'https://m.douban.com/rexxar/api/v2/tv/26322642/interests'
# Mobile User-Agent strings
useragents = [
    "Mozilla/5.0 (iPhone; CPU iPhone OS 9_2 like Mac OS X) AppleWebKit/601.1 (KHTML, like Gecko) CriOS/47.0.2526.70 Mobile/13C71 Safari/601.1.46",
    "Mozilla/5.0 (Linux; U; Android 4.4.4; Nexus 5 Build/KTU84P) AppleWebkit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows Phone OS 7.5; Trident/5.0; IEMobile/9.0)"
]

def visit_URL(i):
    print(">>>>>", i)
    # Request headers with a randomly chosen mobile User-Agent
    headers = {
        'Host': 'm.douban.com',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': random.choice(useragents)
    }
    params = {
        'count': '20',  # must match the loop step below, or pages will overlap
        'order_by': 'hot',
        'start': str(i),
        'for_mobile': '1',
        'ck': 'dNhr'
    }
    res = requests.get(url=url, headers=headers, params=params)
    res_json = res.json()
    interests = res_json['interests']
    print(len(interests))
    for item in interests:
        with open('huge.txt', 'a', encoding='utf-8') as file:
            if item['user'] and item['user']['name']:
                file.write('Reviewer: ' + item['user']['name'] + '\n')
            else:
                file.write('Reviewer: none\n')
            if item['create_time']:
                file.write('Review time: ' + item['create_time'] + '\n')
            else:
                file.write('Review time: none\n')
            if item['comment']:
                file.write('Review: ' + item['comment'] + '\n')
            else:
                file.write('Review: none\n')
            if item['rating'] and item['rating']['value']:
                file.write('Movie rating: ' + str(item['rating']['value']) + '\n\n')
            else:
                file.write('Movie rating: none\n\n')

if __name__ == '__main__':
    # 66891 reviews in total at the time of writing; the step matches count per request
    for i in range(0, 66891, 20):
        # time.sleep(2)
        visit_URL(i)
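Hard-coding the total review count ties the loop to one moment in time; stopping when a page comes back empty is more robust. A sketch with the page fetcher passed in as a parameter so it can be stubbed for testing (this is a generic pattern, not Douban's actual client):

```python
import time

def scrape_all(fetch_page, page_size=20, delay=0):
    # fetch_page(start) should return the list of records at that offset
    results = []
    start = 0
    while True:
        page = fetch_page(start)
        if not page:           # empty page: no more data
            break
        results.extend(page)
        start += page_size
        if delay:
            time.sleep(delay)  # be polite to the server
    return results

# Stubbed fetcher standing in for the API call above
fake_data = [{'id': n} for n in range(45)]
fetch = lambda start: fake_data[start:start + 20]
print(len(scrape_all(fetch)))  # 45
```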
Summary
That's all for this article. I hope its content offers some reference value for your study or work. Thank you for supporting 腳本之家.