A Toutiao (今日头条) Crawler

I have been studying Python's Scrapy framework lately and writing quite a few small examples along the way. As the saying goes, to do a good job one must first sharpen one's tools; this post sharpens mine by crawling the Tech section of Toutiao.
The tutorial depends on the scrapy and pymongo modules, so install those up front.

  • 1. Analyze the Toutiao news API. A captured feed response looks like this:
      {
    "has_more": false,
    "message": "success",
    "data": [
      {
        "chinese_tag": "财经",
        "media_avatar_url": "//p3.pstatp.com/large/1233000741099c9f4a59",
        "is_feed_ad": false,
        "tag_url": "news_finance",
        "title": "【特写】数字货币的信徒们",
        "single_mode": true,
        "middle_mode": true,
        "abstract": "在九月初在中国发文整治ICO后,硅谷的区块链项目创业者林吓洪把筹集的资金全部还给了中国投资者们。在那次整治中,监管部门宣布,首次代币发行(Initial Coin Offering,简称ICO)属于非法行为,所有平台必须返还筹集的资金。",
        "tag": "news_finance",
        "label": [
          "数字货币",
          "风投",
          "比特币",
          "投资",
          "经济"
        ],
        "behot_time": 1506326903,
        "source_url": "/group/6469550301866803469/",
        "source": "界面新闻",
        "more_mode": false,
        "article_genre": "article",
        "image_url": "//p1.pstatp.com/list/190x124/317200041ea1cf451f52",
        "has_gallery": false,
        "group_source": 1,
        "comments_count": 10,
        "group_id": "6469550301866803469",
        "media_url": "/c/user/52857496566/"
      },
      {
        "image_url": "//p3.pstatp.com/list/190x124/31770009f2c887fdb867",
        "single_mode": true,
        "abstract": "早,来看看今天的新闻。小米就校招风波道歉@DoNews【小米就校招风波道歉 对涉事员工通报批评】近日,一名自称在河南郑州大学日语专业学习的大学生表示,她与同学在一次校园招聘宣讲会上无故被来自小米公司的主管人员讽刺。导致自己和本专业的同学愤然离开。",
        "middle_mode": false,
        "more_mode": true,
        "tag": "news_tech",
        "label": [
          "小米科技",
          "亚马逊公司",
          "Uber",
          "美国",
          "乐视"
        ],
        "tag_url": "news_tech",
        "title": "小米就校招风波道歉;ofo正寻求新一轮融资",
        "chinese_tag": "科技",
        "source": "虎嗅APP",
        "group_source": 1,
        "has_gallery": false,
        "media_url": "/c/user/3358265611/",
        "media_avatar_url": "//p2.pstatp.com/large/18a50010126f235bf938",
        "image_list": [
          {
            "url": "//p3.pstatp.com/list/31770009f2c887fdb867"
          },
          {
            "url": "//p1.pstatp.com/list/317b00061c410d6d0352"
          },
          {
            "url": "//p3.pstatp.com/list/3172000337e0332b337f"
          }
        ],
        "source_url": "/group/6469472579270672654/",
        "article_genre": "article",
        "is_feed_ad": false,
        "behot_time": 1506326303,
        "comments_count": 114,
        "group_id": "6469472579270672654"
      },
      {
        "image_url": "//p3.pstatp.com/list/190x124/3c64000074857b07c81d",
        "single_mode": true,
        "abstract": "蓝燕,经常关注香港电影的人应该不陌生,在2011年靠着香港三级影片《3D肉蒲团之极乐宝鉴》走红,并逐渐出现人们的视线中。被称为新一代的“艳星”。可走红后的她并没有获得很好的资源,所接拍的影片大多数是一些不知名的配角。",
        "middle_mode": false,
        "more_mode": true,
        "tag": "news_entertainment",
        "label": [
          "蓝燕 ",
          "肉蒲团",
          "投资",
          "娱乐"
        ],
        "tag_url": "news_entertainment",
        "title": "艳星蓝燕美照曝光 靠着《3D肉蒲团》走红",
        "chinese_tag": "娱乐",
        "source": "陪你乐不停",
        "group_source": 2,
        "has_gallery": false,
        "media_url": "/c/user/61497461135/",
        "media_avatar_url": "//p3.pstatp.com/large/382f000f5dd459d0eb74",
        "image_list": [
          {
            "url": "//p3.pstatp.com/list/3c64000074857b07c81d"
          },
          {
            "url": "//p3.pstatp.com/list/3c6000022fcec3f4ca48"
          },
          {
            "url": "//p3.pstatp.com/list/3c60000230155491a84d"
          }
        ],
        "source_url": "/group/6469578595697164813/",
        "article_genre": "article",
        "is_feed_ad": false,
        "behot_time": 1506325703,
        "comments_count": 2,
        "group_id": "6469578595697164813"
      },
      {
        "log_extra": "{\"ad_price\":\"Wci5d__iJRJZyLl3_-IlEuQYjwGdUeJEIl99Ew\",\"convert_id\":0,\"external_action\":0,\"req_id\":\"201709251608231720180471641841E3\",\"rit\":1}",
        "image_url": "//p3.pstatp.com/large/26c00009898dbc9c5a52",
        "read_count": 12196,
        "ban_comment": 1,
        "single_mode": true,
        "abstract": "",
        "image_list": [],
        "has_video": false,
        "article_type": 1,
        "tag": "ad",
        "display_info": "股市迎来重磅利好消息,这些股或将上涨翻倍,微信领取",
        "has_m3u8_video": 0,
        "label": "广告",
        "user_verified": 0,
        "aggr_type": 1,
        "expire_seconds": 314754930,
        "cell_type": 0,
        "article_sub_type": 0,
        "group_flags": 4096,
        "bury_count": 0,
        "title": "股市迎来重磅利好消息,这些股或将上涨翻倍,微信领取",
        "ignore_web_transform": 1,
        "source_icon_style": 3,
        "tip": 0,
        "hot": 0,
        "share_url": "http://m.toutiao.com/group/6465452273144168717/?iid=0&app=news_article",
        "has_mp4_video": 0,
        "source": "联讯证券",
        "comment_count": 0,
        "article_url": "http://cq3.ilyae.cn/toutiao2/index.html",
        "filter_words": [
          {
            "id": "1:74",
            "name": "股票",
            "is_selected": false
          },
          {
            "id": "1:6",
            "name": "金融保险",
            "is_selected": false
          },
          {
            "id": "2:0",
            "name": "来源:联讯证券",
            "is_selected": false
          },
          {
            "id": "4:2",
            "name": "看过了",
            "is_selected": false
          }
        ],
        "has_gallery": false,
        "publish_time": 1505355414,
        "ad_id": 69048936405,
        "action_list": [
          {
            "action": 1,
            "extra": {},
            "desc": ""
          },
          {
            "action": 3,
            "extra": {},
            "desc": ""
          },
          {
            "action": 7,
            "extra": {},
            "desc": ""
          },
          {
            "action": 9,
            "extra": {},
            "desc": ""
          }
        ],
        "has_image": false,
        "cell_layout_style": 1,
        "tag_id": 6465452273144168717,
        "source_url": "http://cq3.ilyae.cn/toutiao2/index.html",
        "video_style": 0,
        "verified_content": "",
        "is_feed_ad": true,
        "large_image_list": [],
        "item_id": 6465452273144168717,
        "natant_level": 2,
        "tag_url": "search/?keyword=None",
        "article_genre": "ad",
        "level": 0,
        "cell_flag": 10,
        "source_open_url": "sslocal://search?from=channel_source&keyword=%E8%81%94%E8%AE%AF%E8%AF%81%E5%88%B8",
        "display_url": "http://cq3.ilyae.cn/toutiao2/index.html",
        "digg_count": 0,
        "behot_time": 1506325103,
        "article_alt_url": "http://m.toutiao.com/group/article/6465452273144168717/",
        "cursor": 1506325103999,
        "url": "http://cq3.ilyae.cn/toutiao2/index.html",
        "preload_web": 0,
        "ad_label": "广告",
        "user_repin": 0,
        "label_style": 3,
        "item_version": 0,
        "group_id": "6465452273144168717",
        "middle_image": {
          "url": "http://p3.pstatp.com/large/26c00009898dbc9c5a52",
          "width": 456,
          "url_list": [
            {
              "url": "http://p3.pstatp.com/large/26c00009898dbc9c5a52"
            },
            {
              "url": "http://pb9.pstatp.com/large/26c00009898dbc9c5a52"
            },
            {
              "url": "http://pb1.pstatp.com/large/26c00009898dbc9c5a52"
            }
          ],
          "uri": "large/26c00009898dbc9c5a52",
          "height": 256
        }
      },
      {
        "image_url": "//p3.pstatp.com/list/190x124/3b050002710aff2b3422",
        "single_mode": true,
        "abstract": "如今2017年微信的月活跃用户达9亿,微信成了中国最大用户群体的手机APP,它集通讯、娱乐、支付等于一体。很多朋友习惯每天打开微信收发信息、查看朋友圈动态。",
        "middle_mode": false,
        "more_mode": true,
        "tag": "news_tech",
        "label": [
          "移动互联网",
          "微信",
          "泽西岛",
          "美女",
          "欧洲"
        ],
        "tag_url": "news_tech",
        "title": "为什么微信中那么多美女来自安道尔或泽西岛?这是一种暗语吗",
        "chinese_tag": "科技",
        "source": "狮子夜光杯",
        "group_source": 2,
        "has_gallery": false,
        "media_url": "/c/user/53397416061/",
        "media_avatar_url": "//p3.pstatp.com/large/12330013573aaa4c18b1",
        "image_list": [
          {
            "url": "//p3.pstatp.com/list/3b050002710aff2b3422"
          },
          {
            "url": "//p3.pstatp.com/list/3b05000271096e15298e"
          },
          {
            "url": "//p9.pstatp.com/list/3b080000bdf469bf7330"
          }
        ],
        "source_url": "/group/6467319367565574670/",
        "article_genre": "article",
        "is_feed_ad": false,
        "behot_time": 1506324503,
        "comments_count": 46,
        "group_id": "6467319367565574670"
      },
      {
        "image_url": "//p3.pstatp.com/list/190x124/3b0f0003c132eb485453",
        "single_mode": true,
        "abstract": "最近几周,各大互联网科技公司都开始秋季招聘了这些是正经的公司的招聘笔试题:关于c++的inline关键字,以下说法正确的是()对N个数进行排序,在各自最优条件下以下算法复杂度最低的是()为百度设计一款新产品,可以结合百度现有的优势和资源,专注解决大学生用户的某个需求痛点,请给出主",
        "middle_mode": false,
        "more_mode": true,
        "tag": "news_design",
        "label": [
          "电子商务",
          "京东",
          "面试",
          "刘强东",
          "计算复杂性理论"
        ],
        "tag_url": "search/?keyword=%E8%AE%BE%E8%AE%A1",
        "title": "京东校招笔试题“如何用0.01元买到一瓶可乐”?竟被苏宁秀了一脸",
        "chinese_tag": "设计",
        "source": "小禾科技",
        "group_source": 2,
        "has_gallery": false,
        "media_url": "/c/user/59954335187/",
        "media_avatar_url": "//p9.pstatp.com/large/39b10003f6cddd5128fa",
        "image_list": [
          {
            "url": "//p3.pstatp.com/list/3b0f0003c132eb485453"
          },
          {
            "url": "//p3.pstatp.com/list/3b110000ab4c79a56483"
          },
          {
            "url": "//p9.pstatp.com/list/3b1600007cde1cf9bdd0"
          }
        ],
        "source_url": "/group/6468140283245625870/",
        "article_genre": "article",
        "is_feed_ad": false,
        "behot_time": 1506323903,
        "comments_count": 87,
        "group_id": "6468140283245625870"
      },
      {
        "chinese_tag": "科技",
        "media_avatar_url": "//p9.pstatp.com/large/2c6600049c7144303824",
        "is_feed_ad": false,
        "tag_url": "news_tech",
        "title": "为什么家里的WIFI时快时慢?竟然是因为……",
        "single_mode": true,
        "middle_mode": false,
        "abstract": "现在还是个信息的时代,不仅手机、电脑非常普遍,而且现在的人们都喜欢用无线网络之WiFi,因为这样更加便捷。在家使用手机的时候,不用打开手机的数据流量,只要使用WiFi就可以了,无限的流量使用,太方便了。但是很多用户都会有这样的体验,WiFi速度时快时慢的,很是烦恼。",
        "group_source": 2,
        "image_list": [
          {
            "url": "//p3.pstatp.com/list/3b1600009ba8a7500c7e"
          },
          {
            "url": "//p1.pstatp.com/list/3b1600009bb32db8a78a"
          },
          {
            "url": "//p3.pstatp.com/list/3b120000c5dac40ae0fe"
          }
        ],
        "label": [
          "Wi-Fi",
          "科技"
        ],
        "behot_time": 1506323303,
        "source_url": "/group/6468146583144759822/",
        "source": "水电小知识",
        "more_mode": true,
        "article_genre": "article",
        "image_url": "//p3.pstatp.com/list/190x124/3b1600009ba8a7500c7e",
        "tag": "news_tech",
        "has_gallery": false,
        "group_id": "6468146583144759822",
        "media_url": "/c/user/61795844218/"
      }
    ],
    "next": {
      "max_behot_time": 1506323303
      }
    }
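Everything the crawler needs later can be read straight off this JSON: `data` holds the article rows and `next.max_behot_time` is the paging cursor. A minimal parsing sketch against a trimmed copy of the response above:

```python
import json

# Trimmed copy of the feed response shown above.
sample = """
{
  "has_more": false,
  "message": "success",
  "data": [
    {"title": "【特写】数字货币的信徒们",
     "source_url": "/group/6469550301866803469/",
     "behot_time": 1506326903}
  ],
  "next": {"max_behot_time": 1506323303}
}
"""

feed = json.loads(sample)
articles = []
if feed["message"] == "success":
    for row in feed["data"]:
        # source_url is site-relative, so join it with the site root.
        articles.append((row["title"],
                         "https://www.toutiao.com" + row["source_url"]))
next_cursor = feed["next"]["max_behot_time"]  # cursor for the next page
```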
    
  • 2. Analyze the request parameters and the paging loop:
    • The tech-news feed is fetched with a GET request carrying these query parameters:
      category:news_tech
      utm_source:toutiao
      widen:1
      max_behot_time:0
      max_behot_time_tmp:0
      tadrequire:true
      as:A155493CA8EBB0F
      cp:59C84BEB601F7E1
    
    • Scroll the page so another asynchronous request fires, then compare its parameters: only a few of them change between requests. The previous response carries a field next->max_behot_time, and its value is exactly what the next request sends as max_behot_time and max_behot_time_tmp. The as and cp parameters have little effect on the GET request, so the values from any one captured request can be reused. As for max_behot_time itself, it looks like the current Unix timestamp; since the server hands the value to us directly there is no need to guess, but packet analysis is often exactly this process of guessing what API parameters mean, and you can verify it yourself:
      max_behot_time:1506326351
      max_behot_time_tmp:1506326351
      as:A115996C383BD3C
      cp:59C82BAD839CBE1
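Given these observations, the next-page URL can be assembled from the previous response's cursor. A sketch, with the as/cp values simply reused from one captured request as noted above:

```python
from urllib.parse import urlencode

BASE = "https://www.toutiao.com/api/pc/feed/"

def next_page_url(cursor, as_="A115996C383BD3C", cp="59C82BAD839CBE1"):
    # Both max_behot_time and max_behot_time_tmp take the value of
    # next->max_behot_time from the previous response.
    params = {
        "category": "news_tech",
        "utm_source": "toutiao",
        "widen": 1,
        "max_behot_time": cursor,
        "max_behot_time_tmp": cursor,
        "tadrequire": "true",
        "as": as_,
        "cp": cp,
    }
    return BASE + "?" + urlencode(params)

url = next_page_url(1506326351)
```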
    
  • 3. Construct the request URLs:
    • The Scrapy project directory layout:
      (project structure diagram)
    • The settings.py source:
# -*- coding: utf-8 -*-
# Scrapy settings for todayNews project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'todayNews'

SPIDER_MODULES = ['todayNews.spiders']
NEWSPIDER_MODULE = 'todayNews.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'todayNews (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept':'text/javascript, text/html, application/xml, text/xml, */*',
    'Accept-Encoding':'gzip, deflate, sdch, br',
    'Accept-Language':'zh-CN,zh;q=0.8',
    'Cache-Control':'no-cache',
    'Connection':'keep-alive',
    'Content-Type':'application/x-www-form-urlencoded',
    'Cookie':'uuid="w:3db0708ea2c549fab1a5371c56f16176"; UM_distinctid=15c7147fecd8d-0a4277451-4349052c-100200-15c7147fecf6f; csrftoken=af9a5a0d4cd30794e6c04511ca9f31eb; _ga=GA1.2.312467779.1496549163; __guid=32687416.738502311042654200.1505560389379.9048; tt_track_id=c7baa73a99ec9787ead7a2f6b01ff56b; _ba=BA0.2-20170923-51d9e-ErxmsyZIIoxNOzZgf6Us; tt_webid=6427627096743282178; WEATHER_CITY=%E5%8C%97%E4%BA%AC; CNZZDATA1259612802=610804389-1496543540-null%7C1506261975; __tasessionId=0vta7k1uc1506263833592; tt_webid=6427627096743282178',
    'Host':'www.toutiao.com',
    'Pragma':'no-cache',
    'Referer':'https://www.toutiao.com/ch/news_tech/',
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
    'X-Requested-With':'XMLHttpRequest'
}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'todayNews.middlewares.TodaynewsSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'todayNews.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {  
   'todayNews.pipelines.MongoPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
DOWNLOAD_DELAY = 1   
MONGO_URI="localhost"
MONGO_DATABASE="toutiao"
MONGO_USER="username"
MONGO_PASS="password"
  • The pipelines.py source:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


import pymongo

class MongoPipeline(object):
    collection_name = "science"

    def __init__(self, mongo_uri, mongo_db, mongo_user, mongo_pass):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db
        self.mongo_user = mongo_user
        self.mongo_pass = mongo_pass

    @classmethod
    def from_crawler(cls, crawler):
        # Read the connection settings defined in settings.py.
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE'),
            mongo_user=crawler.settings.get('MONGO_USER'),
            mongo_pass=crawler.settings.get('MONGO_PASS'))

    def open_spider(self, spider):
        # Credentials go to MongoClient directly; the old
        # Database.authenticate() call was removed in PyMongo 4.
        self.client = pymongo.MongoClient(
            self.mongo_uri,
            username=self.mongo_user,
            password=self.mongo_pass,
            authSource=self.mongo_db)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # An upsert keyed on a stable field would avoid storing duplicates:
        # self.db[self.collection_name].update({'url_token': item['url_token']}, {'$set': dict(item)}, True)
        # return item
        self.db[self.collection_name].insert_one(dict(item))
        return item
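Successive feed pages can repeat articles, which is why the commented-out upsert line above is worth keeping in mind: keyed on a stable field, it inserts new rows and overwrites repeats instead of duplicating them. A hypothetical in-memory stand-in, keyed on group_id, shows the effect (with MongoDB this would be `update_one({"group_id": ...}, {"$set": item}, upsert=True)`):

```python
# In-memory stand-in for an upsert keyed on group_id (a hypothetical
# but stable field choice, visible in every article row above).
store = {}

def upsert(item):
    store[item["group_id"]] = item  # insert new rows, overwrite repeats
    return item

upsert({"group_id": "6469550301866803469", "comments_count": 10})
upsert({"group_id": "6469550301866803469", "comments_count": 25})
```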
  • The toutiao.py source:
# -*- coding: utf-8 -*-
from scrapy import Spider, Request
import json
import logging
from todayNews.items import TodaynewsItem

class ToutiaoSpider(Spider):
    name = "toutiao"
    allowed_domains = ["www.toutiao.com"]
    # First request; the as/cp values are copied from a captured request.
    start_urls = ['https://www.toutiao.com/api/pc/feed/?min_behot_time=0&category=__all__&utm_source=toutiao&widen=1&tadrequire=true&as=A1D5394CB72C38F&cp=59C71C03883F0E1']
    # Template for follow-up pages; behot_time is filled with the value of
    # next->max_behot_time from the previous response.
    url = 'https://www.toutiao.com/api/pc/feed/?category=news_tech&utm_source=toutiao&widen=1&max_behot_time={behot_time}&max_behot_time_tmp={behot_time_tmp}&tadrequire=true&as=A165E92C97CC487&cp=59C74CC4E8F7BE1'

    def parse(self, response):
        jsonData = json.loads(response.body.decode("utf-8"))
        if jsonData["message"] == 'success':
            # Yield each feed row as-is; the pipeline stores the raw dict.
            for rowData in jsonData["data"]:
                yield rowData
            # Only read the paging cursor on a successful response.
            nextTime = jsonData["next"]["max_behot_time"]
            yield Request(url=self.url.format(behot_time=nextTime, behot_time_tmp=nextTime),
                          callback=self.parse)
        else:
            logging.info("The data is null")
      
  • items defines the structured extraction. Because the JSON Toutiao returns is not uniform (see the sample shown above), no item fields are declared for extraction; the raw rows are passed straight to the pipeline and saved to MongoDB.
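Since the rows are irregular (the ad entry above has keys the article entries lack, and vice versa), a small normalizer can keep a fixed subset of fields and fill the gaps before storage. A sketch; the field list is just a plausible pick from the JSON sample above:

```python
# Keys seen in the article rows of the feed JSON (hypothetical selection).
FIELDS = ("title", "abstract", "source", "group_id", "behot_time", "tag")

def normalize(row):
    # Keep only the chosen keys; missing ones become None.
    return {k: row.get(k) for k in FIELDS}

item = normalize({"title": "t", "group_id": "1", "extra": "dropped"})
```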
  • 4. Start the crawler with scrapy crawl toutiao and inspect the scraped data:


    (screenshot: the saved data)
Done.