To be honest, this lesson was a bit long and a bit hard; it took me several days to finally get everything working.
The lesson actually consists of four parts: extracting information from a single real web page; extracting information from a batch of real web pages; extracting information saved in my own account; and simulating a mobile client to extract images that are otherwise hard to get.
My Results
My Code
from bs4 import BeautifulSoup
import requests
import time

url_saves = 'http://www.tripadvisor.cn/Saves?v=full#303955'
url = 'http://www.tripadvisor.cn/Attractions-g664891-Activities-Macau.html'
# List pages are offset by 30 items each: oa30, oa60, ..., oa900
urls = ['http://www.tripadvisor.cn/Attractions-g664891-Activities-oa{}-Macau.html#ATTRACTION_LIST'.format(str(i)) for i in range(30, 930, 30)]
headers = {
'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36 115Browser/7.0.0',
'Cookie' : 'TASSK=enc%3AEBisFh79eMUzT3MJTUwN%2BMvxieGPPqnq%2BieJmkZ19iLNWHKSWruCw0P2ixRnAMvI0%2BQbDGL4T5c%3D; TAUnique=%1%enc%3A0K2Wy69nxlFpu0sd%2BkOy%2Bg2r1pMQVIUGRHSz2SLqSMkiC9mUUqh3Gg%3D%3D; _jzqckmp=1; __gads=ID=7335946c9dd2b2c6:T=1463646260:S=ALNI_MZKIDpCNf_76WpWe4pteoH0BB7Ldg; TAAuth2=%1%3%3A637ee648eb67e3e41be6f5e2fc85b6b1%3AAFRZN2WL2V59z%2FVe9ajP3C0F%2BLplh7OXkqCsoddosFiI2osT0tQMyVEEFz4%2F8ChdcCJDfMs0M79588stUTZopXo%2FaJ%2Bnf2HLNUpQ2p7v2gAnVTegQC99DChpbFJdRmzZ9tH%2Fux0elV3OGZDaarloitD1sVIsO2ksEcFKn4S4a8FzhqhMzZnslu41MRFq1GtAJoIbifmPstbE0Afw4dZK4oM%3D; _smt_uid=573d80af.bc4ebfa; bdshare_firstime=1463648453166; TATravelInfo=V2*A.2*MG.-1*HP.2*FL.3*RVL.293917_140l503193_140l6427689_140*RS.1; ServerPool=C; ki_t=1463646229531%3B1463646229531%3B1463673603689%3B1%3B13; ki_r=; CM=%1%HanaPersist%2C%2C-1%7Ct4b-pc%2C%2C-1%7CHanaSession%2C%2C-1%7CFtrSess%2C%2C-1%7CRCPers%2C%2C-1%7CHomeAPers%2C%2C-1%7CWShadeSeen%2C%2C-1%7CRCSess%2C%2C-1%7CFtrPers%2C%2C-1%7CHomeASess%2C%2C-1%7Csh%2C%2C-1%7C2016sticksess%2C%2C-1%7CCpmPopunder_1%2C1%2C1463732646%7CCCPers%2C%2C-1%7CCCSess%2C%2C-1%7CWAR_RESTAURANT_FOOTER_SESSION%2C%2C-1%7Csesssticker%2C%2C-1%7C2016stickpers%2C%2C-1%7Ct4b-sc%2C%2C-1%7CMC_IB_UPSELL_IB_LOGOS2%2C%2C-1%7CMC_IB_UPSELL_IB_LOGOS%2C%2C-1%7Csess_rev%2C3%2C-1%7CSaveFtrPers%2C%2C-1%7CSaveFtrSess%2C%2C-1%7Cpers_rev%2C%2C-1%7CRBASess%2C%2C-1%7Cperssticker%2C%2C-1%7CMetaFtrSess%2C%2C-1%7CRBAPers%2C%2C-1%7CWAR_RESTAURANT_FOOTER_PERSISTANT%2C%2C-1%7CMetaFtrPers%2C%2C-1%7C; TAReturnTo=%1%%2FAttractions-g664891-Activities-Macau.html; roybatty=ANQizvIgk9mg7P1ZdpRYlmCT%2BI4ReEi1jLMRBeLume67cwpQ8f1leiD5rFSZ04pJE6VkPaeLa2OW%2Fh5SlRmreKftvPgy0LjweCDRR9iPoWjtTuPxJ3Jbj%2Be1ydCXLbkwBfZLKD4atIa%2BlbIGdwZqcPcQY8I2JZUjzN1tnrhpjh2m%2C1; NPID=; TASession=%1%V2ID.6B0EA5448E35897C60A5476BB4C9E090*SQ.6*LS.UserReviewController*GR.27*TCPAR.2*TBR.6*EXEX.52*ABTR.23*PPRP.70*PHTB.99*FS.79*CPU.53*HS.popularity*ES.popularity*AS.popularity*DS.5*SAS.popularity*FPS.oldFirst*TS.E2EAD86EE045C9196C22C29430AAF1CB*FA.1*DF.0*LP.%2FAttractions-g664891-Activities-Macau%5C.html*FLO.664891*TRA.true*LD.664891; TAUD=LA-1463673628881-1*LG-29309-2.1.F*LD-29311-.....; Hm_lvt_2947ca2c006be346c7a024ce1ad9c24a=1463646225,1463673598; Hm_lpvt_2947ca2c006be346c7a024ce1ad9c24a=1463673626; _qzja=1.652561790.1463646228659.1463648120069.1463673598195.1463673598195.1463673627363..0.0.16.3; _qzjb=1.1463673598194.2.0.0.0; _qzjc=1; _qzjto=1.0.0; _jzqa=1.2281301167737561300.1463646229.1463648120.1463673598.3; _jzqc=1; _jzqb=1.2.10.1463673598.1'
}
def get_attractions(url, data=None):
    wb_data = requests.get(url)
    time.sleep(4)  # throttle so we don't trigger anti-scraping measures
    soup = BeautifulSoup(wb_data.text, 'lxml')
    titles = soup.select('div.property_title > a[target="_blank"]')
    imgs = soup.select('img[width="160"]')
    cates = soup.select('div.p13n_reasoning_v2 > a')
    # print(titles, imgs, cates, sep='\n--------------\n')
    if data is None:
        for title, img, cate in zip(titles, imgs, cates):
            data = {
                'title': title.get_text(),
                'img': img.get('src'),
                'cates': list(cate.stripped_strings)
            }
            print(data)
def get_favs(url, data=None):
    wb_data = requests.get(url, headers=headers)  # the Cookie in headers keeps us logged in
    soup = BeautifulSoup(wb_data.text, 'lxml')
    titles = soup.select('div > a.location-name')
    imgs = soup.select('img.photo_image')
    addresses = soup.select('div > span.format_address')
    if data is None:
        for title, img, address in zip(titles, imgs, addresses):
            data = {
                'title': title.get_text(),
                'img': img.get('src'),
                'address': list(address.stripped_strings)
            }
            print(data)

for single_url in urls:
    get_attractions(single_url)
My Summary
- Scraping a real web page differs from a local one in the libraries we import: we additionally need requests.
import requests
- Loading the page also works differently. For a local page we read the file:
with open('file_path') as wb_data:
For a real page we fetch it over HTTP:
wb_data = requests.get(url)
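Side by side, a minimal sketch of the two approaches (local_page.html is just a placeholder file name):
from bs4 import BeautifulSoup
import requests

# Local page: read the HTML straight from disk
with open('local_page.html') as f:
    soup_local = BeautifulSoup(f.read(), 'lxml')

# Real page: fetch the HTML over HTTP first
resp = requests.get('http://www.tripadvisor.cn/Attractions-g664891-Activities-Macau.html')
soup_remote = BeautifulSoup(resp.text, 'lxml')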
- Extracting information from a real page is more involved: simply copying the selector path from the browser is usually not enough.
For example:
titles = soup.select('div.property_title > a[target="_blank"]')
The bare selector matches far too many a tags, so we narrow it down with a distinguishing attribute from the tag (here target="_blank").
Another example:
imgs = soup.select('img[width="160"]')
Here [width="160"] singles out the thumbnail images we want.
And another:
titles = soup.select('div > a.location-name')
Here we grab the a tag by its class, a.location-name, then search the page source for that class and check whether the number of matches equals the number of items on the page. If the counts agree, the selector is correct; a code version of this check is sketched below.
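Rather than searching the source by hand, we can count matches in code. A quick sketch, assuming soup is the parsed page from above:
# If this count equals the number of items visible on the page,
# the selector is almost certainly right
print(len(soup.select('div > a.location-name')))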
- The headers dict carries the browser's "User-Agent" and "Cookie"; with them we can simulate a logged-in user or a mobile browser. Both values can be copied from a request under the Network panel of the browser's developer tools, as sketched below.
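A minimal sketch of a logged-in request built this way (the Cookie value is a placeholder; paste your own from the Network panel):
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36',
    'Cookie': 'PASTE_YOUR_OWN_COOKIE_HERE',  # copied from a logged-in browser session
}
resp = requests.get('http://www.tripadvisor.cn/Saves?v=full', headers=headers)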
- To scrape a whole batch of real pages, we have to spot the pattern in their URLs. For example:
urls = ['http://www.tripadvisor.cn/Attractions-g664891-Activities-oa{}-Macau.html#ATTRACTION_LIST'.format(str(i)) for i in range(30, 930, 30)]
The {} above is where the offset is substituted; the pages step by 30: 30, 60, 90, ..., 900.
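A quick way to confirm the generated URLs look right is to print the ends of the list:
print(urls[0])    # ends with oa30-Macau.html#ATTRACTION_LIST
print(urls[-1])   # ends with oa900-Macau.html#ATTRACTION_LIST
print(len(urls))  # 30 list pages in total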
- In a real network environment many sites have anti-scraping measures, so we must not hammer them with rapid requests. We bring in the time library and pause 4 seconds between fetches.
import time
time.sleep(4)
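The same idea as a small standalone helper, in case the throttling should live apart from the parsing (fetch_politely is a hypothetical name):
import time
import requests

def fetch_politely(page_url, delay=4):
    # pause before every request so the crawl stays slow and steady
    time.sleep(delay)
    return requests.get(page_url)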
- We wrap each task in a function with the def keyword, so the list-page scraper and the saved-items scraper can each be called on their own.
- Some images are assembled by JavaScript into a gallery, but what we want is the image files themselves. In that case it pays to modify the headers to masquerade as a mobile browser: mobile pages keep JS to a minimum for compatibility, so the images are much easier to extract. A sketch follows.
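A minimal sketch of this mobile trick (the iPhone User-Agent is one common choice; the bare img selector is deliberately generic and would need narrowing for a real page):
import requests
from bs4 import BeautifulSoup

mobile_headers = {
    'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) AppleWebKit/601.1.46 (KHTML, like Gecko) Version/9.0 Mobile/13B143 Safari/601.1',
}
resp = requests.get(url, headers=mobile_headers)  # url as defined in the code above
mobile_soup = BeautifulSoup(resp.text, 'lxml')
print([img.get('src') for img in mobile_soup.select('img')])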
- This lesson covered four skills: one, parsing a single real web page; two, parsing a run of consecutive pages; three, simulating a mobile client to extract hard-to-reach images; four, simulating a logged-in account to extract the information saved under it.