
A detailed guide to traversing the document tree and operating on tags with the Python scraping library BeautifulSoup

 更新時(shí)間:2020年01月25日 16:19:08   作者:BQW_  
This article walks through, method by method, how to traverse the document tree and operate on tags with the Python scraping library BeautifulSoup.

Below are examples of traversing the document tree and operating on its tags with BeautifulSoup; all of them cover the most basic operations.

html_doc = """
<html><head><title>The Dormouse's story</title></head>

<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" id="link1">Elsie</a>,
<a class="sister" id="link2">Lacie</a> and
<a class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc,'lxml')

一、子節(jié)點(diǎn)

一個(gè)Tag可能包含多個(gè)字符串或者其他Tag,這些都是這個(gè)Tag的子節(jié)點(diǎn).BeautifulSoup提供了許多操作和遍歷子結(jié)點(diǎn)的屬性。

1. Getting a Tag by its name

print(soup.head)
print(soup.title)
<head><title>The Dormouse's story</title></head>
<title>The Dormouse's story</title>

Accessing a tag by name returns only the first Tag with that name. To get every Tag of a given kind, use the find_all method.

soup.find_all('a')
[<a class="sister" id="link1">Elsie</a>,
 <a class="sister" id="link2">Lacie</a>,
 <a class="sister" id="link3">Tillie</a>]
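As a small aside (a minimal sketch, not from the article), find_all can filter by attributes as well as by tag name. The stdlib 'html.parser' is used here so the snippet runs without the lxml dependency:

```python
from bs4 import BeautifulSoup

# Minimal sketch: find_all filters by tag name and, optionally, by
# attributes such as id. 'html.parser' (stdlib) avoids needing lxml.
soup = BeautifulSoup(
    "<p><a id='link1'>Elsie</a><a id='link2'>Lacie</a></p>",
    "html.parser")

print(len(soup.find_all("a")))          # 2 matching tags
print(soup.find_all("a", id="link2"))   # only the second link
```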

2. The contents attribute: returns a Tag's child nodes as a list

head_tag = soup.head
head_tag.contents
[<title>The Dormouse's story</title>]
title_tag = head_tag.contents[0]
title_tag
<title>The Dormouse's story</title>
title_tag.contents
["The Dormouse's story"]

3. children: an attribute for looping over the child nodes

for child in title_tag.children:
  print(child)
The Dormouse's story
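The practical difference between the two can be sketched as follows (my own example, using the stdlib 'html.parser' so it runs without lxml): .contents is a real list, while .children only supports iteration.

```python
from bs4 import BeautifulSoup

# .contents materializes the direct children as a list, so it supports
# len() and indexing; .children yields the same children one at a time.
soup = BeautifulSoup(
    "<p>Once upon a time <b>three</b> little sisters</p>", "html.parser")

p = soup.p
print(len(p.contents))   # 3 direct children
print(p.contents[1])     # <b>three</b>

kinds = [type(child).__name__ for child in p.children]
print(kinds)             # ['NavigableString', 'Tag', 'NavigableString']
```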

4. descendants: contents and children both return only direct children, while descendants recursively iterates over all of a tag's descendants

for child in head_tag.children:
  print(child)
<title>The Dormouse's story</title>
for child in head_tag.descendants:
  print(child)
<title>The Dormouse's story</title>
The Dormouse's story
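A common use of .descendants is to walk the whole tree once, e.g. to collect every tag at any depth. This is my own sketch (stdlib 'html.parser', hypothetical markup):

```python
from bs4 import BeautifulSoup, Tag

# Walk the entire tree in parse order and keep only the Tag nodes,
# skipping the NavigableString children.
soup = BeautifulSoup(
    "<html><head><title>t</title></head>"
    "<body><p>a<b>b</b></p></body></html>", "html.parser")

tag_names = [d.name for d in soup.descendants if isinstance(d, Tag)]
print(tag_names)   # ['html', 'head', 'title', 'body', 'p', 'b']
```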

5. string: if a tag has exactly one child of type NavigableString, the tag's .string returns that child

title_tag.string
"The Dormouse's story"

如果一個(gè)tag只有一個(gè)子節(jié)點(diǎn),那么使用.string可以獲得其唯一子結(jié)點(diǎn)的NavigableString.

head_tag.string
"The Dormouse's story"

If a tag has more than one child, .string cannot tell which child's content it should return, so it returns None.

print(soup.html.string)
None
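When .string returns None like this, get_text() (a separate BeautifulSoup method, not covered in this article) still works: it concatenates every string in the subtree. A minimal sketch using the stdlib 'html.parser':

```python
from bs4 import BeautifulSoup

# .string gives up when a tag has several children; .get_text()
# instead joins all the strings found anywhere below the tag.
soup = BeautifulSoup("<p>Hello <b>little</b> world</p>", "html.parser")

print(soup.p.string)       # None  (three children, so .string is ambiguous)
print(soup.p.get_text())   # Hello little world
```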

6. strings and stripped_strings

If a tag contains more than one string, they can be iterated over with .strings:

for string in soup.strings:
  print(string)
The Dormouse's story


The Dormouse's story


Once upon a time there were three little sisters; and their names were

Elsie
,

Lacie
 and

Tillie
;
and they lived at the bottom of a well.


...

The output of .strings contains many spaces and blank lines; use .stripped_strings to strip out this whitespace.

for string in soup.stripped_strings:
  print(string)
The Dormouse's story
The Dormouse's story
Once upon a time there were three little sisters; and their names were
Elsie
,
Lacie
and
Tillie
;
and they lived at the bottom of a well.
...
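In practice, stripped_strings is handy for pulling readable text out of markup by joining the cleaned fragments. My own sketch, using the stdlib 'html.parser' and hypothetical markup:

```python
from bs4 import BeautifulSoup

# Join the whitespace-stripped text fragments back together with
# single spaces to get a clean, readable string.
soup = BeautifulSoup(
    "<p class='story'>\n  Once upon a time\n  <a>Elsie</a> lived here.\n</p>",
    "html.parser")

text = " ".join(soup.stripped_strings)
print(text)   # Once upon a time Elsie lived here.
```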

二、父節(jié)點(diǎn)

1. parent: get an element's parent node

title_tag = soup.title
title_tag.parent
<head><title>The Dormouse's story</title></head>

Strings have parents too:

title_tag.string.parent
<title>The Dormouse's story</title>

2. parents: recursively iterate over all of an element's ancestors

link = soup.a
for parent in link.parents:
  if parent is None:
    print(parent)
  else:
    print(parent.name)
p
body
html
[document]
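A typical use of .parents is to record the path from a tag up to the root, for example when debugging where a match sits in the tree. My own minimal sketch with the stdlib 'html.parser':

```python
from bs4 import BeautifulSoup

# Collect the name of every ancestor, from the immediate parent up to
# the BeautifulSoup object itself (whose name is '[document]').
soup = BeautifulSoup(
    "<html><body><p><a id='link1'>Elsie</a></p></body></html>",
    "html.parser")

path = [parent.name for parent in soup.a.parents]
print(path)   # ['p', 'body', 'html', '[document]']
```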

三、兄弟結(jié)點(diǎn)

sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></a>",'lxml')
print(sibling_soup.prettify())
<html>
 <body>
 <a>
  <b>
  text1
  </b>
  <c>
  text2
  </c>
 </a>
 </body>
</html>

1.next_sibling和previous_sibling

sibling_soup.b.next_sibling
<c>text2</c>
sibling_soup.c.previous_sibling
<b>text1</b>

在實(shí)際文檔中.next_sibling和previous_sibling通常是字符串或者空白符

soup.find_all('a')
[<a class="sister" id="link1">Elsie</a>,
 <a class="sister" id="link2">Lacie</a>,
 <a class="sister" id="link3">Tillie</a>]
soup.a.next_sibling # the first <a>'s next_sibling is ',\n'
',\n'
soup.a.next_sibling.next_sibling
<a class="sister" id="link2">Lacie</a>
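To skip over those string siblings (commas, newlines) and jump straight to the next sibling tag, BeautifulSoup also offers find_next_sibling(). A minimal sketch of my own, using the stdlib 'html.parser':

```python
from bs4 import BeautifulSoup

# .next_sibling may be a punctuation/whitespace string;
# find_next_sibling("a") skips ahead to the next <a> tag directly.
soup = BeautifulSoup(
    "<p><a id='link1'>Elsie</a>,\n<a id='link2'>Lacie</a></p>",
    "html.parser")

first = soup.find(id="link1")
print(repr(first.next_sibling))       # ',\n'  -- a string, not a tag
print(first.find_next_sibling("a"))   # <a id="link2">Lacie</a>
```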

2.next_siblings和previous_siblings

for sibling in soup.a.next_siblings:
  print(repr(sibling))
',\n'
<a class="sister" id="link2">Lacie</a>
' and\n'
<a class="sister" id="link3">Tillie</a>
';\nand they lived at the bottom of a well.'
for sibling in soup.find(id="link3").previous_siblings:
  print(repr(sibling))
' and\n'
<a class="sister" id="link2">Lacie</a>
',\n'
<a class="sister" id="link1">Elsie</a>
'Once upon a time there were three little sisters; and their names were\n'

IV. Going back and forth

1.next_element和previous_element

These point to the next or previous object (a string or a tag) in parse order, i.e. the successor or predecessor in a depth-first traversal of the document.

last_a_tag = soup.find("a", id="link3")
print(last_a_tag.next_sibling)
print(last_a_tag.next_element)
;
and they lived at the bottom of a well.
Tillie
last_a_tag.previous_element
' and\n'
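The contrast can be shown in one place with a tiny document (my own sketch, stdlib 'html.parser'): next_sibling stays at the same tree level, while next_element follows parse order and so descends into the tag first.

```python
from bs4 import BeautifulSoup

# next_sibling jumps to the neighbouring tag at the same level;
# next_element is whatever was parsed immediately after <b>,
# which is the string inside it.
soup = BeautifulSoup("<p><b>one</b><i>two</i></p>", "html.parser")

b = soup.b
print(b.next_sibling)   # <i>two</i>  (same level)
print(b.next_element)   # one         (the string inside <b>)
```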

2.next_elements和previous_elements

With .next_elements and .previous_elements you can step forward or backward through the document's parsed content, as if re-reading the document in parse order.

for element in last_a_tag.next_elements:
  print(repr(element))
'Tillie'
';\nand they lived at the bottom of a well.'
'\n'
<p class="story">...</p>
'...'
'\n'

更多關(guān)于使用Python爬蟲庫BeautifulSoup遍歷文檔樹并對(duì)標(biāo)簽進(jìn)行操作的方法與文章大家可以點(diǎn)擊下面的相關(guān)文章

相關(guān)文章

最新評(píng)論