Web Crawler + CNN (Convolutional Neural Network): Recognizing and Classifying Famous Painters' Works

Example description:

We train a CNN on works by four painters (Van Gogh, Monet, Picasso, and Da Vinci) to produce a model that can recognize which of the four painted a given work.

Required environment: Python 3.6 + TensorFlow

For the CPU version, see: https://www.jianshu.com/p/da141c730180
For the GPU version, see: https://www.jianshu.com/p/62d414aa843e

Three steps:
  1. Use a crawler to scrape images from Baidu Images
  2. Build the neural network, train it, and produce a model
  3. Use the resulting model for recognition and classification

1. Scraping Baidu images with a crawler

Analyzing the site with the Chrome developer tools, we find a Baidu Images API endpoint whose JSON responses contain the image URLs, as shown below:

[Figure: analyzing the Baidu Images site to find the image API endpoint]

The endpoint URL we obtain is: https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&ct=201326592&is=&fp=result&queryWord=%E6%A2%B5%E9%AB%98%E4%BD%9C%E5%93%81&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=&hd=&latest=&copyright=&word=%E6%A2%B5%E9%AB%98%E4%BD%9C%E5%93%81&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&fr=&expermode=&force=&pn=60&rn=30&gsm=3c&1550715038298= (here %E6%A2%B5%E9%AB%98%E4%BD%9C%E5%93%81 is the URL-encoded form of the keyword 梵高作品, "Van Gogh works")

Analyzing this URL, its three key parameters are:

  1. pn: the image offset of the current page, e.g. 60 means page 2 (an offset of 60 images)
  2. rn: the number of images returned per page, e.g. 30 means thirty images per page
  3. queryWord and word: the search keyword, e.g. 梵高作品 ("Van Gogh works")

By adjusting these parameters we can fetch any Baidu images in any quantity; the Python code below then crawls the images and saves them to a local directory.
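
As a minimal sketch (assuming the endpoint accepts a reduced parameter set, which held at the time of writing), the request URL can also be built with urllib.parse.urlencode instead of string concatenation:

import requests
import urllib.parse

# Hypothetical helper: fetch one page of search results as parsed JSON
def fetch_page(keyword, pn=0, rn=30):
    params = {
        'tn': 'resultjson_com', 'ipn': 'rj', 'fp': 'result',
        'queryWord': keyword, 'word': keyword,
        'ie': 'utf-8', 'oe': 'utf-8',
        'pn': pn, 'rn': rn,
    }
    # urlencode percent-encodes the Chinese keyword automatically
    url = 'https://image.baidu.com/search/acjson?' + urllib.parse.urlencode(params)
    return requests.get(url, timeout=10).json()

# e.g. fetch_page('梵高作品') returns the first 30 results for "Van Gogh works"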

Create a new file, spider.py, with the following code:

import requests
import os
import urllib.parse
import json
# download one image and save it as dirPath/imgName
def downImg(imgUrl, dirPath, imgName):
    filename = os.path.join(dirPath, imgName)
    try:
        # add a Referer header so Baidu does not reject the request
        myheaders = {
            'Referer':'https://image.baidu.com'
        }
        res = requests.get(imgUrl, timeout=15,headers=myheaders)
        if str(res.status_code)[0] == "4":
            print(str(res.status_code), ":", imgUrl)
            return False
    except Exception as e:
        print("抛出异常:", imgUrl)
        print(e)
        return False
    with open(filename, "wb") as f:
        f.write(res.content)
    return True

words = [["梵高作品",'FG'],['莫奈作品','MN'],['毕加索作品','BJS'],['达芬奇作品','DFQ']]  # search keyword and folder code per painter (FG=Van Gogh, MN=Monet, BJS=Picasso, DFQ=Da Vinci)
trainPath = "train_data/"
# create the train_data folder if it does not exist
if not os.path.exists(trainPath):
    os.mkdir(trainPath)
for word in words:
    dirPath = trainPath + word[1]
    # create the painter's subfolder if it does not exist
    if not os.path.exists(dirPath):
        os.mkdir(dirPath)
    word = urllib.parse.quote(word[0])  # the keyword is Chinese, so it must be URL-encoded
    pn = 30  # image offset of the current page, e.g. 60 means page 2 (an offset of 60 images)
    rn = 30  # number of images returned per page, e.g. 30 means thirty per page
    i = 1  # running index used to name the image files
    while pn <= 30 * 20:  # fetch 20 pages (600 images in total); consider raising this to collect more training data
        try:
            url = 'https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&ct=201326592&is=&fp=result&queryWord=' + word + '&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=&hd=&latest=&copyright=&word=' + word + '&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&fr=&expermode=&force=&pn=' + str(
                pn) + '&rn=' + str(rn) + '&gsm=3c&1550715038298='
            jsonBytes = requests.get(url, timeout=10).content  # fetch the JSON response as bytes
            jsonData = jsonBytes.decode('utf-8')  # decode bytes to string
            print("---------------------------------------------------------")
            jsonData = jsonData.replace("\\'", '')  # strip escaped single quotes; json.loads raises an error without this
            print(jsonData)
            print("---------------------------------------------------------")
            jsonObj = json.loads(jsonData)  # parse the JSON string into an object
            if 'data' in jsonObj:
                for item in jsonObj['data']:
                    if 'thumbURL' in item:
                        imgName = str(i) + ".jpg"
                        downImg(item['thumbURL'], dirPath, imgName)  # download the image
                        print(item['thumbURL'])
                        i += 1
            pn += rn  # next page
        except Exception as e:
            print(e)


After the script finishes, the sample data for training sits under the current directory, laid out as follows:

[Figure: directory layout of the downloaded training data]

With that, the sample data is ready; next we build the neural network.

2. Building the neural network, reading the images, training, and producing a model

OpenCV is used here, so install the opencv module:

# install a prebuilt OpenCV wheel (Python 3.6, 64-bit Windows)
pip install http://ai-download.xmgc360.com/opencv_python-3.3.0.10-cp36-cp36m-win_amd64.whl

The scikit-learn module is also required:

pip install scikit-learn  -i https://pypi.tuna.tsinghua.edu.cn/simple

Create a file dataset.py for reading and preprocessing the images; the code is as follows:

import cv2
import os
import glob
from sklearn.utils import shuffle
import numpy as np
def load_train(train_path, image_size, classes):
    images = []
    labels = []
    img_names = []
    cls = []
    print('Going to read training images')
    for fields in classes:
        index = classes.index(fields)
        print('Now going to read {} files (Index: {})'.format(fields, index))
        path = os.path.join(train_path, fields, '*g')  # matches *.jpg, *.jpeg and *.png files
        files = glob.glob(path)
        for fl in files:
            try:
                # read the image
                image = cv2.imread(fl)
                # resize to image_size x image_size (the aspect ratio is not preserved)
                image = cv2.resize(image, (image_size, image_size), 0, 0, cv2.INTER_LINEAR)
                # convert to float32
                image = image.astype(np.float32)
                # scale pixel values into [0, 1]
                image = np.multiply(image, 1.0 / 255.0)
                images.append(image)
                label = np.zeros(len(classes))  # one-hot label for this class
                label[index] = 1.0
                labels.append(label)
                flbase = os.path.basename(fl)
                img_names.append(flbase)
                cls.append(fields)
            except Exception as e:
                print(e)

    images = np.array(images)
    labels = np.array(labels)
    img_names = np.array(img_names)
    cls = np.array(cls)

    return images, labels, img_names, cls


class DataSet(object):

  def __init__(self, images, labels, img_names, cls):
    self._num_examples = images.shape[0]

    self._images = images
    self._labels = labels
    self._img_names = img_names
    self._cls = cls
    self._epochs_done = 0
    self._index_in_epoch = 0

  @property
  def images(self):
    return self._images

  @property
  def labels(self):
    return self._labels

  @property
  def img_names(self):
    return self._img_names

  @property
  def cls(self):
    return self._cls

  @property
  def num_examples(self):
    return self._num_examples

  @property
  def epochs_done(self):
    return self._epochs_done

  def next_batch(self, batch_size):
    """Return the next `batch_size` examples from this data set."""
    start = self._index_in_epoch
    self._index_in_epoch += batch_size

    if self._index_in_epoch > self._num_examples:
      # End of epoch: start over from the beginning (any leftover tail
      # smaller than batch_size is skipped for this epoch)
      self._epochs_done += 1
      start = 0
      self._index_in_epoch = batch_size
      assert batch_size <= self._num_examples
    end = self._index_in_epoch

    return self._images[start:end], self._labels[start:end], self._img_names[start:end], self._cls[start:end]


def read_train_sets(train_path, image_size, classes, validation_size):
  class DataSets(object):
    pass
  data_sets = DataSets()

  images, labels, img_names, cls = load_train(train_path, image_size, classes)
  images, labels, img_names, cls = shuffle(images, labels, img_names, cls)

  if isinstance(validation_size, float):
    validation_size = int(validation_size * images.shape[0])

  validation_images = images[:validation_size]
  validation_labels = labels[:validation_size]
  validation_img_names = img_names[:validation_size]
  validation_cls = cls[:validation_size]

  train_images = images[validation_size:]
  train_labels = labels[validation_size:]
  train_img_names = img_names[validation_size:]
  train_cls = cls[validation_size:]

  data_sets.train = DataSet(train_images, train_labels, train_img_names, train_cls)
  data_sets.valid = DataSet(validation_images, validation_labels, validation_img_names, validation_cls)

  return data_sets
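
As a quick sanity check of the loader (a sketch, assuming the train_data folders from step 1 are in place), you can load the data and inspect the array shapes:

import dataset

classes = ['BJS', 'DFQ', 'FG', 'MN']
data = dataset.read_train_sets('train_data', 64, classes, validation_size=0.2)
# images: (N, 64, 64, 3) float32 in [0, 1]; labels: (N, 4) one-hot
print(data.train.images.shape, data.train.labels.shape)
print(data.valid.images.shape, data.valid.labels.shape)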

Create train.py, which builds the network, trains it, and produces the model; the code is as follows:

import dataset
import tensorflow as tf
import os
import time
from datetime import timedelta
import math
import random
import numpy as np
# conda install --channel https://conda.anaconda.org/menpo opencv3
#Adding Seed so that random initialization is consistent
from numpy.random import seed
seed(10)
from tensorflow import set_random_seed
set_random_seed(20)


batch_size = 32

#Prepare input data
classes = ['BJS','DFQ','FG','MN']
num_classes = len(classes)

# 20% of the data will automatically be used for validation
validation_size = 0.2
img_size = 64
num_channels = 3
train_path='train_data'

# We shall load all the training and validation images and labels into memory using openCV and use that during training
data = dataset.read_train_sets(train_path, img_size, classes, validation_size=validation_size)


print("Complete reading input data. Will Now print a snippet of it")
print("Number of files in Training-set:\t\t{}".format(len(data.train.labels)))
print("Number of files in Validation-set:\t{}".format(len(data.valid.labels)))



session = tf.Session()
x = tf.placeholder(tf.float32, shape=[None, img_size,img_size,num_channels], name='x')

## labels
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
y_true_cls = tf.argmax(y_true, axis=1)



##Network graph params
filter_size_conv1 = 3
num_filters_conv1 = 32

filter_size_conv2 = 3
num_filters_conv2 = 32

filter_size_conv3 = 3
num_filters_conv3 = 64

fc_layer_size = 1024

def create_weights(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05))

def create_biases(size):
    return tf.Variable(tf.constant(0.05, shape=[size]))



def create_convolutional_layer(input,
               num_input_channels,
               conv_filter_size,
               num_filters):

    ## We shall define the weights that will be trained using create_weights function. 3 3 3 32
    weights = create_weights(shape=[conv_filter_size, conv_filter_size, num_input_channels, num_filters])
    ## We create biases using the create_biases function. These are also trained.
    biases = create_biases(num_filters)

    ## Creating the convolutional layer
    layer = tf.nn.conv2d(input=input,
                     filter=weights,
                     strides=[1, 1, 1, 1],
                     padding='SAME')

    layer += biases

    layer = tf.nn.relu(layer)

    ## We shall be using max-pooling.
    layer = tf.nn.max_pool(value=layer,
                            ksize=[1, 2, 2, 1],
                            strides=[1, 2, 2, 1],
                            padding='SAME')
    ## Output of pooling is fed to Relu which is the activation function for us.
    #layer = tf.nn.relu(layer)

    return layer



def create_flatten_layer(layer):
    #We know that the shape of the layer will be [batch_size img_size img_size num_channels]
    # But let's get it from the previous layer.
    layer_shape = layer.get_shape()

    ## Number of features will be img_height * img_width* num_channels. But we shall calculate it in place of hard-coding it.
    num_features = layer_shape[1:4].num_elements()

    ## Now, we Flatten the layer so we shall have to reshape to num_features
    layer = tf.reshape(layer, [-1, num_features])

    return layer


def create_fc_layer(input,
             num_inputs,
             num_outputs,
             use_relu=True):

    #Let's define trainable weights and biases.
    weights = create_weights(shape=[num_inputs, num_outputs])
    biases = create_biases(num_outputs)

    # Fully connected layer takes input x and produces wx+b.Since, these are matrices, we use matmul function in Tensorflow
    layer = tf.matmul(input, weights) + biases

    # note: dropout is applied unconditionally here, i.e. also at inference time
    layer = tf.nn.dropout(layer, keep_prob=0.7)

    if use_relu:
        layer = tf.nn.relu(layer)


    return layer

# Convolutional layer 1 (convolution, pooling, activation)
layer_conv1 = create_convolutional_layer(input=x,
               num_input_channels=num_channels,
               conv_filter_size=filter_size_conv1,
               num_filters=num_filters_conv1)
# Convolutional layer 2 (convolution, pooling, activation)
layer_conv2 = create_convolutional_layer(input=layer_conv1,
               num_input_channels=num_filters_conv1,
               conv_filter_size=filter_size_conv2,
               num_filters=num_filters_conv2)
# Convolutional layer 3 (convolution, pooling, activation)
layer_conv3= create_convolutional_layer(input=layer_conv2,
               num_input_channels=num_filters_conv2,
               conv_filter_size=filter_size_conv3,
               num_filters=num_filters_conv3)
# Flatten the output of the three convolutional layers into a 1-D vector for the fully connected layers
layer_flat = create_flatten_layer(layer_conv3)
# Fully connected layer 1
layer_fc1 = create_fc_layer(input=layer_flat,
                     num_inputs=layer_flat.get_shape()[1:4].num_elements(),
                     num_outputs=fc_layer_size,
                     use_relu=True)
# Fully connected layer 2 (outputs one logit per class)
layer_fc2 = create_fc_layer(input=layer_fc1,
                     num_inputs=fc_layer_size,
                     num_outputs=num_classes,
                     use_relu=False)
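
# Shape walk-through for img_size = 64 (each 2x2 max-pool halves height and width):
#   x           -> (?, 64, 64, 3)
#   layer_conv1 -> (?, 32, 32, 32)
#   layer_conv2 -> (?, 16, 16, 32)
#   layer_conv3 -> (?, 8, 8, 64)
#   layer_flat  -> (?, 4096)   # 8 * 8 * 64
#   layer_fc1   -> (?, 1024)
#   layer_fc2   -> (?, 4)      # one logit per painter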

y_pred = tf.nn.softmax(layer_fc2,name='y_pred')

y_pred_cls = tf.argmax(y_pred, axis=1)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
                                                    labels=y_true)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))


session.run(tf.global_variables_initializer())


def show_progress(epoch, feed_dict_train, feed_dict_validate, val_loss,i):
    acc = session.run(accuracy, feed_dict=feed_dict_train)
    val_acc = session.run(accuracy, feed_dict=feed_dict_validate)
    msg = "Training Epoch {0}--- iterations: {1}--- Training Accuracy: {2:>6.1%}, Validation Accuracy: {3:>6.1%},  Validation Loss: {4:.3f}"
    print(msg.format(epoch + 1,i, acc, val_acc, val_loss))

total_iterations = 0

# make sure the checkpoint directory exists before the first save
if not os.path.exists('./model'):
    os.mkdir('./model')
saver = tf.train.Saver()
def train(num_iteration):
    global total_iterations

    for i in range(total_iterations,
                   total_iterations + num_iteration):

        x_batch, y_true_batch, _, cls_batch = data.train.next_batch(batch_size)
        x_valid_batch, y_valid_batch, _, valid_cls_batch = data.valid.next_batch(batch_size)


        feed_dict_tr = {x: x_batch,
                           y_true: y_true_batch}
        feed_dict_val = {x: x_valid_batch,
                              y_true: y_valid_batch}

        session.run(optimizer, feed_dict=feed_dict_tr)

        if i % int(data.train.num_examples/batch_size) == 0:
            val_loss = session.run(cost, feed_dict=feed_dict_val)
            epoch = int(i / int(data.train.num_examples/batch_size))

            show_progress(epoch, feed_dict_tr, feed_dict_val, val_loss,i)
            saver.save(session, './model/painting.ckpt',global_step=i)

    total_iterations += num_iteration

train(num_iteration=8000)
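
Note that num_iteration counts batches, not epochs. As a rough, hedged estimate (assuming the ~600 images scraped in step 1 and the 20% validation split above):

num_train = int(600 * 0.8)          # ~480 training images
iters_per_epoch = num_train // 32   # ~15 iterations per epoch at batch_size 32
epochs = 8000 // iters_per_epoch    # ~533 epochs for 8000 iterations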

[Figure: related directories]

Run train.py to start training:

[Figure: training in progress]

Screenshot of the output during training:

[Figure: intermediate training log]

When training completes, the model checkpoint files have been written out:

[Figure: model checkpoint files]

Once the model is produced, we use the newest checkpoint files for prediction. Here we use:

painting.ckpt-7998.meta: stores the network graph structure
painting.ckpt-7998.data: stores the model weights themselves

These files are referenced in the code below.
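
If you prefer not to hard-code the step number, tf.train.latest_checkpoint can look up the newest checkpoint for you (a small sketch, assuming the checkpoints were saved under ./model as in train.py):

import tensorflow as tf

ckpt_prefix = tf.train.latest_checkpoint('./model')  # e.g. './model/painting.ckpt-7998'
saver = tf.train.import_meta_graph(ckpt_prefix + '.meta')
# then: saver.restore(sess, ckpt_prefix)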

3. Recognition and classification

Create predict.py, which loads the model and specifies fg_test_1.jpg as the file to predict.


The code is as follows:

import tensorflow as tf
import numpy as np
import os,glob,cv2
import sys,argparse

image_size=64
num_channels=3
images = []

path = 'fg_test_1.jpg'
image = cv2.imread(path)
# Resizing the image to our desired size and preprocessing will be done exactly as done during training
image = cv2.resize(image, (image_size, image_size),0,0, cv2.INTER_LINEAR)
images.append(image)
images = np.array(images, dtype=np.uint8)
images = images.astype('float32')
images = np.multiply(images, 1.0/255.0)
#The input to the network is of shape [None image_size image_size num_channels]. Hence we reshape.
x_batch = images.reshape(1, image_size,image_size,num_channels)

## Let us restore the saved model
sess = tf.Session()
# Step-1: Recreate the network graph. At this step only graph is created.
saver = tf.train.import_meta_graph('./model/painting.ckpt-7998.meta')
# Step-2: Now let's load the weights saved using the restore method.
saver.restore(sess, './model/painting.ckpt-7998')

# Accessing the default graph which we have restored
graph = tf.get_default_graph()

# Now let's get hold of the op that produces the output.
# In the original network, y_pred is the tensor holding the network's prediction
y_pred = graph.get_tensor_by_name("y_pred:0")

## Let's feed the images to the input placeholders
x= graph.get_tensor_by_name("x:0")
y_true = graph.get_tensor_by_name("y_true:0")
y_test_images = np.zeros((1, 4))  # dummy labels for the y_true placeholder (not used when computing y_pred)


### Creating the feed_dict that is required to be fed to calculate y_pred
feed_dict_testing = {x: x_batch, y_true: y_test_images}
result=sess.run(y_pred, feed_dict=feed_dict_testing)
# result has the format [prob_BJS, prob_DFQ, prob_FG, prob_MN],
# in the same class order used during training
res_label = ['BJS','DFQ','FG','MN']
print(res_label[result.argmax()])
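
To see the whole probability distribution instead of only the top class, a small addition (using the same res_label order) would be:

# print each class with its predicted probability
for label, prob in zip(res_label, result[0]):
    print('{}: {:.2%}'.format(label, prob))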

Put the test file, fg_test_1.jpg, in the current directory:

[Figure: fg_test_1.jpg, the test image]

The prediction result looks like this:

[Figure: output of the prediction script]

The result is FG (Van Gogh), meaning the painting was recognized correctly.

Note:

The directory structure is shown below:

[Figure: project directory structure]

A GUI version of the prediction code is also included.

Required module:

pip install pillow  -i https://pypi.tuna.tsinghua.edu.cn/simple

Create a new file, predict_gui.py, and copy in the code below:

from tkinter import *
from tkinter import filedialog
from PIL import Image, ImageTk
import tensorflow as tf
import numpy as np
import cv2
import tkinter
import tkinter.messagebox

image_size=64
num_channels=3
images = []
filepath = ''


## Start the session
sess = tf.Session()
# Load the model graph structure
saver = tf.train.import_meta_graph('./model/painting.ckpt-145.meta')
# Load the model weights
saver.restore(sess, './model/painting.ckpt-145')

# Get the restored default graph
graph = tf.get_default_graph()

if __name__ == "__main__":
    root = Tk()
    root.title('Painting prediction')
    #setting up a tkinter canvas with scrollbars
    frame = Frame(root, bd=2, relief=SUNKEN)
    frame.grid_rowconfigure(0, weight=1)
    frame.grid_columnconfigure(0, weight=1)
    xscroll = Scrollbar(frame, orient=HORIZONTAL)
    xscroll.grid(row=1, column=0, sticky=E+W)
    yscroll = Scrollbar(frame)
    yscroll.grid(row=0, column=1, sticky=N+S)
    canvas = Canvas(frame, bd=0, xscrollcommand=xscroll.set, yscrollcommand=yscroll.set)
    canvas.grid(row=0, column=0, sticky=N+S+E+W)
    xscroll.config(command=canvas.xview)
    yscroll.config(command=canvas.yview)
    frame.pack(fill=BOTH,expand=1)

    def printcoords():
        global filepath
        File = filedialog.askopenfilename(parent=root, initialdir="D:/",title='Choose an image.')
        filename = ImageTk.PhotoImage(Image.open(File))
        canvas.image = filename
        canvas.create_image(0,0,anchor='nw',image=filename)
        filepath =  File

    def predict():
        image_size = 64
        num_channels = 3
        images = []

        path = filepath
        print(path)
        #image = cv2.imread(path)  # cv2.imread cannot handle paths with non-ASCII (e.g. Chinese) characters
        image = cv2.imdecode(np.fromfile(path, dtype=np.uint8), -1)  # this variant supports non-ASCII paths

        image = cv2.resize(image, (image_size, image_size), 0, 0, cv2.INTER_LINEAR)
        images.append(image)
        images = np.array(images, dtype=np.uint8)
        images = images.astype('float32')
        images = np.multiply(images, 1.0 / 255.0)

        x_batch = images.reshape(1, image_size, image_size, num_channels)



        # Get tensor: y_pred
        y_pred = graph.get_tensor_by_name("y_pred:0")

        # Get tensor: x
        x = graph.get_tensor_by_name("x:0")
        # Get tensor: y_true
        y_true = graph.get_tensor_by_name("y_true:0")
        y_test_images = np.zeros((1, 4))


        feed_dict_testing = {x: x_batch, y_true: y_test_images}

        # run the test image through the network
        result = sess.run(y_pred, feed_dict=feed_dict_testing)

        res_label = ['This painting is by Picasso', 'This painting is by Da Vinci', 'This painting is by Van Gogh', 'This painting is by Monet']
        tkinter.messagebox.showinfo("Prediction result", res_label[result.argmax()])

    Button(root, text='2. Predict', command=predict).pack(side=RIGHT)
    Button(root, text='1. Choose image', command=printcoords).pack(side=RIGHT)
    label = Label(root, text='Click the buttons in order >>>>>>')
    label.pack(side=RIGHT)
    root.mainloop()

Screenshot of the interface:

[Figure: GUI prediction window]