Up to now my scraping toolkit has mainly been the requests + BeautifulSoup combo, but in practice I ran into another gem: lxml. After reading Scrape the web using CSS Selectors in Python, I wanted to try it even more.
The target this time is 盗墓笔记 (Daomubiji). Plenty of sites host this novel, but this one has a fairly simple HTML layout, which makes it an easy place to start.
Open the site in Chrome, click into the first chapter, and start inspecting:
The CSS selector is easy to read off:
("#content > div.container > ul > li > a")
Next, lxml is used to scrape the body text of the first chapter. The inspection step is the same; here the CSS selector is
("#content > div.post_entry")
This is also the main reason I dropped BS in favor of lxml: with exactly the same CSS selector, BS came back empty here.
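For comparison, here is a minimal sketch of the lxml route with that selector (the chapter URL is the one used in the full code below); lxml.cssselect translates the CSS expression into XPath under the hood:

import lxml.html
import requests
from lxml.cssselect import CSSSelector

r = requests.get('http://www.nanpaisanshu.org/4355.html')
tree = lxml.html.fromstring(r.text)
sel = CSSSelector('#content > div.post_entry')
results = sel(tree)
print(len(results))  # 1 here means lxml found the post body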
One thing to watch out for: requests reports this page's encoding as ISO-8859-1 (the content is actually UTF-8), so the text we read has to be re-encoded and then decoded:
content.encode('ISO-8859-1', 'ignore').decode('utf-8')
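An alternative worth knowing: requests lets you override the detected encoding before touching r.text, which avoids the round-trip entirely. A sketch, assuming the first_chapter_url and proxies variables from the full code below:

r = requests.get(first_chapter_url, proxies=proxies)
print(r.encoding)     # ISO-8859-1, guessed from the response headers
r.encoding = 'utf-8'  # tell requests what the page actually is
content = r.text      # now decoded correctly, no encode/decode needed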
It is also handy to write the result to a txt file and send it to a Kindle.
A simple version of the code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os

import lxml.html
import requests
from bs4 import BeautifulSoup as BS
from lxml.cssselect import CSSSelector
sub_folder = os.path.join(os.getcwd(), "daomubiji")
if not os.path.exists(sub_folder):
    os.mkdir(sub_folder)
# Optional: set these to your own proxy, or drop the proxies argument below.
proxies = {
    "http": "http://proxy.yourcompany.com:8080/",
    "https": "https://proxy.yourcompany.com:8080/",
}
base_url = 'http://www.nanpaisanshu.org/daomubiji'
r = requests.get(base_url, proxies=proxies)

# Chapter links from the index page, via BeautifulSoup.
soup = BS(r.text, "lxml")
url_lists = soup.select("#content > div.container > ul > li > a")
print(url_lists[0].get("href"))
first_chapter_url = 'http://www.nanpaisanshu.org/4355.html'
r = requests.get(first_chapter_url, proxies=proxies)
print(r.encoding)  # ISO-8859-1, mis-detected; the page is really UTF-8

# Round-trip the mis-decoded text so BS sees proper UTF-8.
soup = BS(r.text.encode('ISO-8859-1', 'ignore').decode('utf-8'), "lxml")
content_lists = soup.select("#content > div.post_entry")  # came back empty for me
print("Use Requests: ", url_lists[0].get_text().encode('ISO-8859-1', 'ignore').decode('utf-8'))
# Now the same page with lxml: build the DOM tree from the raw response text.
tree = lxml.html.fromstring(r.text)
# print(lxml.html.tostring(tree))  # uncomment to inspect the parsed tree
# Select the chapter title.
sel_of_title = CSSSelector('#content > div.post > div.post_title > h2')
results = sel_of_title(tree)
match = results[0]
title = match.text.strip().encode('ISO-8859-1', 'ignore').decode('utf-8')
print("title: ", title)

filename = os.path.join(sub_folder, title + ".txt")
print(filename)
# Construct a CSS selector for the paragraphs of the chapter body.
sel_of_contents = CSSSelector('div.post_entry > p')

# Apply the selector to the DOM tree.
results = sel_of_contents(tree)
# Print the text of the first result.
match = results[0]
# print(lxml.html.tostring(match))  # uncomment to see the HTML instead
print("Use lxml", match.text.encode('ISO-8859-1', 'ignore').decode('utf-8'))
# Get the text out of all the results and write it to a txt file.
data = [result.text for result in results]
with open(filename, "w", encoding="utf-8") as f:
    for content in data:
        if content:
            f.write("{}\n".format(content.encode('ISO-8859-1', 'ignore').decode('utf-8')))