Traversing the Document Tree and Operating on Tags with the Python Scraping Library BeautifulSoup
Below are examples of using the Python scraping library BeautifulSoup to traverse the document tree and operate on tags. All of it is very basic material.
html_doc = """ <html><head><title>The Dormouse's story</title></head> <p class="title"><b>The Dormouse's story</b></p> <p class="story">Once upon a time there were three little sisters; and their names were <a rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" class="sister" id="link1">Elsie</a>, <a rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" class="sister" id="link2">Lacie</a> and <a rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" class="sister" id="link3">Tillie</a>; and they lived at the bottom of a well.</p> <p class="story">...</p> """ from bs4 import BeautifulSoup soup = BeautifulSoup(html_doc,'lxml')
I. Child Nodes
A Tag may contain multiple strings or other Tags, all of which are children of that Tag. BeautifulSoup provides many attributes for operating on and iterating over child nodes.
1. Getting a Tag by name
print(soup.head)
print(soup.title)
<head><title>The Dormouse's story</title></head>
<title>The Dormouse's story</title>
Accessing a tag by name only returns the first Tag with that name. To get every Tag of a given kind, use the find_all method.
soup.find_all('a')
[<a class="sister" rel="external nofollow" id="link1">Elsie</a>, <a class="sister" rel="external nofollow" id="link2">Lacie</a>, <a class="sister" rel="external nofollow" id="link3">Tillie</a>]
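find_all also accepts filters. As a small illustrative sketch that goes slightly beyond this article, tags can be filtered by attributes such as id or CSS class (class_ is used because class is a Python keyword):
soup.find_all('a', id='link2')       # only the second link
soup.find_all('a', class_='sister')  # all three links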
2. The contents attribute: returns a Tag's children as a list
head_tag = soup.head
head_tag.contents
[<title>The Dormouse's story</title>]
title_tag = head_tag.contents[0]
title_tag
<title>The Dormouse's story</title>
title_tag.contents
["The Dormouse's story"]
3. children: use this attribute to loop over a tag's direct children
for child in title_tag.children:
    print(child)
The Dormouse's story
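Note that .children is an iterator rather than a list; if a list is needed it has to be materialized explicitly. A quick illustrative check:
print(list(title_tag.children))  # ["The Dormouse's story"]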
4. descendants: whereas contents and children return only the direct children, descendants recursively iterates over all of a tag's descendants
for child in head_tag.children:
    print(child)
<title>The Dormouse's story</title>
for child in head_tag.descendants:
    print(child)
<title>The Dormouse's story</title>
The Dormouse's story
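To make the difference concrete, a small illustrative check (not from the original text):
print(len(head_tag.contents))           # 1: only the direct child, the <title> tag
print(len(list(head_tag.descendants)))  # 2: the <title> tag plus its text node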
5. string: if a tag has exactly one child of type NavigableString, that child can be accessed as .string
title_tag.string
"The Dormouse's story"
If a tag has only a single child, .string returns that sole child's NavigableString.
head_tag.string
"The Dormouse's story"
If a tag has more than one child, it is ambiguous which child .string should refer to, so it returns None.
print(soup.html.string)
None
6. strings and stripped_strings
If a tag contains more than one string, they can all be retrieved by looping over .strings
for string in soup.strings:
    print(string)
The Dormouse's story

The Dormouse's story

Once upon a time there were three little sisters; and their names were

Elsie
,

Lacie
 and

Tillie
;
and they lived at the bottom of a well.

...
The output of .strings contains a lot of whitespace and blank lines; use stripped_strings to strip this whitespace out.
for string in soup.stripped_strings:
    print(string)
The Dormouse's story
The Dormouse's story
Once upon a time there were three little sisters; and their names were
Elsie
,
Lacie
and
Tillie
;
and they lived at the bottom of a well.
...
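As an aside that goes beyond the original article, the stripped strings can be joined back into a single line, and get_text() offers a similar shortcut for pulling out all the text at once:
print(" ".join(soup.stripped_strings))
print(soup.get_text(" ", strip=True))  # roughly equivalent shortcut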
II. Parent Nodes
1. parent: get an element's parent node
title_tag = soup.title
title_tag.parent
<head><title>The Dormouse's story</title></head>
Strings have parents too
title_tag.string.parent
<title>The Dormouse's story</title>
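For completeness (standard BeautifulSoup behaviour, not shown in the original): the parent of the top-level <html> tag is the BeautifulSoup object itself, and the BeautifulSoup object has no parent:
html_tag = soup.html
print(type(html_tag.parent))  # <class 'bs4.BeautifulSoup'>
print(soup.parent)            # None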
2. parents: recursively get all of an element's ancestors
link = soup.a
for parent in link.parents:
    if parent is None:
        print(parent)
    else:
        print(parent.name)
p
body
html
[document]
III. Sibling Nodes
sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></b></a>", 'lxml')
print(sibling_soup.prettify())
<html>
 <body>
  <a>
   <b>
    text1
   </b>
   <c>
    text2
   </c>
  </a>
 </body>
</html>
1. next_sibling and previous_sibling
sibling_soup.b.next_sibling
<c>text2</c>
sibling_soup.c.previous_sibling
<b>text1</b>
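The first and last children of a tag have no previous or next sibling, respectively; a quick illustrative check:
print(sibling_soup.b.previous_sibling)  # None: <b> is the first child of <a>
print(sibling_soup.c.next_sibling)      # None: <c> is the last child of <a>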
In a real document, .next_sibling or .previous_sibling is usually a string or whitespace rather than another tag.
soup.find_all('a')
[<a class="sister" rel="external nofollow" id="link1">Elsie</a>, <a class="sister" rel="external nofollow" id="link2">Lacie</a>, <a class="sister" rel="external nofollow" id="link3">Tillie</a>]
soup.a.next_sibling  # the next_sibling of the first <a></a> is ',\n'
',\n'
soup.a.next_sibling.next_sibling
<a class="sister" rel="external nofollow" id="link2">Lacie</a>
2. next_siblings and previous_siblings
for sibling in soup.a.next_siblings:
    print(repr(sibling))
',\n'
<a class="sister" rel="external nofollow" id="link2">Lacie</a>
' and\n'
<a class="sister" rel="external nofollow" id="link3">Tillie</a>
';\nand they lived at the bottom of a well.'
for sibling in soup.find(id="link3").previous_siblings:
    print(repr(sibling))
' and\n'
<a class="sister" rel="external nofollow" id="link2">Lacie</a>
',\n'
<a class="sister" rel="external nofollow" id="link1">Elsie</a>
'Once upon a time there were three little sisters; and their names were\n'
IV. Going Backward and Forward
1. next_element and previous_element
These point to the next or previous object that was parsed (a string or a tag), i.e. the next and previous nodes in the document's depth-first parse order.
last_a_tag = soup.find("a", id="link3")
print(last_a_tag.next_sibling)
print(last_a_tag.next_element)
;
and they lived at the bottom of a well.
Tillie
last_a_tag.previous_element
' and\n'
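The two attributes are inverses of each other: stepping back with previous_element and then forward with next_element returns to the original tag. An illustrative check:
print(last_a_tag.previous_element.next_element)
# <a class="sister" rel="external nofollow" id="link3">Tillie</a>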
2. next_elements and previous_elements
With .next_elements and .previous_elements you can move forward or backward through the document's content in parse order, as if the document were being parsed right in front of you.
for element in last_a_tag.next_elements:
    print(repr(element))
'Tillie'
';\nand they lived at the bottom of a well.'
'\n'
<p class="story">...</p>
'...'
'\n'
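As one possible use of this (a sketch with a hypothetical helper name, not from the original), .next_elements can collect every piece of text that appears after a given tag:
from bs4 import NavigableString

def text_after(tag):
    # hypothetical helper: join every string parsed after this tag
    return "".join(e for e in tag.next_elements if isinstance(e, NavigableString))

print(repr(text_after(last_a_tag)))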
更多關(guān)于使用Python爬蟲庫BeautifulSoup遍歷文檔樹并對(duì)標(biāo)簽進(jìn)行操作的方法與文章大家可以點(diǎn)擊下面的相關(guān)文章
- python爬蟲學(xué)習(xí)筆記--BeautifulSoup4庫的使用詳解
- Python爬蟲庫BeautifulSoup的介紹與簡單使用實(shí)例
- python使用beautifulsoup4爬取酷狗音樂代碼實(shí)例
- python3 BeautifulSoup模塊使用字典的方法抓取a標(biāo)簽內(nèi)的數(shù)據(jù)示例
- Python如何使用BeautifulSoup爬取網(wǎng)頁信息
- Python爬蟲實(shí)現(xiàn)使用beautifulSoup4爬取名言網(wǎng)功能案例
- Python BeautifulSoup基本用法詳解(通過標(biāo)簽及class定位元素)