
Scraping Weibo Video Data with a Python Crawler

Updated: December 3, 2021, 16:00:42   Author: 松鼠愛吃餅干
This article shows how to use a Python crawler to collect Weibo video data. It includes detailed code examples and should be a useful reference for anyone learning Python.

Preface

Discover what's new, anytime and anywhere! Weibo shows you every exciting moment in the world and the story behind it, and lets you share what you want to say so the whole world can hear you. Today we'll use Python to collect the good-looking videos on Weibo.

That's right: today's target is Weibo data collection, and what we're scraping are those good-looking videos.

Knowledge points

requests

pprint

Development environment

Version: Python 3.8

Editor: PyCharm 2021.2

How the crawler works

Purpose: collect internet data in bulk (text, images, audio, video)

Essence: a repeated cycle of requests and responses
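That cycle is easy to see in isolation. A minimal sketch using requests (httpbin.org is just a placeholder endpoint for demonstration, not part of this project):

import requests

# One request, one response: a crawler is this exchange, repeated at scale
response = requests.get('https://httpbin.org/get')
print(response.status_code)              # 200 on success
print(response.headers['Content-Type'])  # what kind of data came back
print(response.text[:200])               # first 200 characters of the body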

Implementation

1. Import the required modules

import requests   # HTTP requests
import pprint     # pretty-printing JSON responses

2. Find the target URL

Open the browser's developer tools, switch to the Fetch/XHR filter, click the request that carries the data, and find the target URL:

https://www.weibo.com/tv/api/component?page=/tv/channel/4379160563414111/editor

3. Send the network request

headers = {
    'cookie': '',        # paste your own logged-in cookie here
    'referer': 'https://weibo.com/tv/channel/4379160563414111/editor',
    'user-agent': '',    # paste your browser's User-Agent string here
}
data = {
    'data': '{"Component_Channel_Editor":{"cid":"4379160563414111","count":9}}'
}
url = 'https://www.weibo.com/tv/api/component?page=/tv/channel/4379160563414111/editor'
json_data = requests.post(url=url, headers=headers, data=data).json()
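Before parsing anything, it helps to confirm what came back; this is where the pprint module from the knowledge points above comes in:

# Pretty-print the JSON so the 'data' -> 'Component_Channel_Editor' nesting is visible
pprint.pprint(json_data)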

4. Fetch the data

# url_1 and data_1 are built from each video's oid (see the loop in the complete code below)
json_data_2 = requests.post(url=url_1, headers=headers, data=data_1).json()

5. Filter the data

dict_urls = json_data_2['data']['Component_Play_Playinfo']['urls']   # dict of quality label -> URL
video_url = "https:" + dict_urls[list(dict_urls.keys())[0]]          # take the first quality offered
print(title + "\t" + video_url)
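The urls field is a dict keyed by quality label, and the line above simply grabs whichever key happens to come first. A slightly more defensive sketch (pick_video_url is a hypothetical helper; the quality key names are assumptions, check them against your own response):

def pick_video_url(dict_urls):
    # Assumed quality keys; replace with whatever your response actually contains
    for quality in ('mp4_720p_mp4', 'mp4_hd_mp4'):
        if quality in dict_urls:
            return 'https:' + dict_urls[quality]
    # Fall back to the first entry, as the original code does
    first_key = next(iter(dict_urls))
    return 'https:' + dict_urls[first_key]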

6. Save the data

video_data = requests.get(video_url).content         # download the raw video bytes
with open(f'video\\{title}.mp4', mode='wb') as f:    # the video\ folder must already exist
    f.write(video_data)
print(title, "saved successfully")
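Two practical caveats: the title comes straight from Weibo and may contain characters Windows forbids in file names, and .content loads the whole video into memory at once. A hedged alternative (save_video is a hypothetical helper; the video\ folder is still assumed to exist):

import re
import requests

def save_video(title, video_url):
    # Replace characters Windows forbids in file names
    safe_title = re.sub(r'[\\/:*?"<>|]', '_', title)
    # Stream the download so large videos are not held in memory all at once
    with requests.get(video_url, stream=True) as resp:
        resp.raise_for_status()
        with open(f'video\\{safe_title}.mp4', mode='wb') as f:
            for chunk in resp.iter_content(chunk_size=1024 * 64):
                f.write(chunk)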

Complete code

import os
import requests
import pprint

headers = {
    'cookie': '',        # paste your own logged-in cookie here
    'referer': 'https://weibo.com/tv/channel/4379160563414111/editor',
    'user-agent': '',    # paste your browser's User-Agent string here
}
data = {
    'data': '{"Component_Channel_Editor":{"cid":"4379160563414111","count":9}}'
}
url = 'https://www.weibo.com/tv/api/component?page=/tv/channel/4379160563414111/editor'
json_data = requests.post(url=url, headers=headers, data=data).json()
pprint.pprint(json_data)   # inspect the structure of the channel response

os.makedirs('video', exist_ok=True)   # make sure the output folder exists

ccs_list = json_data['data']['Component_Channel_Editor']['list']
next_cursor = json_data['data']['Component_Channel_Editor']['next_cursor']   # cursor for the next page
for ccs in ccs_list:
    oid = ccs['oid']        # video object id
    title = ccs['title']    # video title, used as the file name
    data_1 = {
        'data': '{"Component_Play_Playinfo":{"oid":"' + oid + '"}}'
    }
    url_1 = 'https://weibo.com/tv/api/component?page=/tv/show/' + oid
    json_data_2 = requests.post(url=url_1, headers=headers, data=data_1).json()
    dict_urls = json_data_2['data']['Component_Play_Playinfo']['urls']   # quality label -> URL
    video_url = "https:" + dict_urls[list(dict_urls.keys())[0]]          # first available quality
    print(title + "\t" + video_url)

    video_data = requests.get(video_url).content
    with open(f'video\\{title}.mp4', mode='wb') as f:
        f.write(video_data)
    print(title, "saved successfully")
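The complete code reads next_cursor but never uses it; it presumably exists for paging through the channel. Below is a sketch of how that might look, reusing url and headers from above and assuming the API accepts the cursor back inside the same JSON payload (the 'next_cursor' request field name is a guess, so verify the real name in the developer tools):

import json

cursor = ''
for page in range(3):   # three pages as a demo
    payload = {'Component_Channel_Editor': {'cid': '4379160563414111', 'count': 9}}
    if cursor:
        payload['Component_Channel_Editor']['next_cursor'] = cursor   # assumed field name
    page_json = requests.post(url=url, headers=headers,
                              data={'data': json.dumps(payload)}).json()
    cursor = page_json['data']['Component_Channel_Editor']['next_cursor']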

That concludes this walkthrough of scraping Weibo video data with Python. For more on collecting video data with Python, see the other related articles on 腳本之家!
