This post continues my previous one, "Ubuntu 16.04 + CUDA 8.0 + Theano Deep Learning Environment Setup (Part 1)".
I. Install Anaconda
1. Download Anaconda from the official website, choosing the installer that matches your Python version and system architecture (32-bit or 64-bit).
2. Change to the folder where the installer was downloaded and run it. In a terminal, enter:
cd ~/Downloads
bash Anaconda3-.4.0-Linux-x86_64.sh
Then press Enter and answer yes at each prompt.
3. If typing python in the terminal still starts the system's default Python, the update to .bashrc has not taken effect yet. Reload it with: source ~/.bashrc
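To confirm that the Anaconda interpreter is now the default, a quick check can be run from Python (the path in the comment is only an example; yours may differ):
import sys
print(sys.executable)  # should point into the Anaconda installation, e.g. ~/anaconda3/bin/python
print(sys.version)     # should report the Anaconda Python build, not the system one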
II. Install Theano
sudo pip install Theano
After installation Theano uses the CPU by default; to compute on the GPU, the configuration must be changed.
Step 1: Press Ctrl+Alt+T to open a terminal and open the .theanorc file:
sudo gedit ~/.theanorc
Write the following into the (initially empty) file:
[global]
floatX=float32
device=cpu
Step 2: To enable GPU acceleration, modify the configuration above: change the device line and add a [cuda] section:
[global]
floatX=float32
device=gpu
[cuda]
root=/usr/local/cuda-8.0
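After saving .theanorc, a quick sanity check from Python shows whether Theano picked up the settings (with the old CUDA backend the device is reported simply as gpu):
import theano
print(theano.config.device)  # expected: gpu (still cpu if the .theanorc change was not read)
print(theano.config.floatX)  # expected: float32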
III. Install Keras
pip install keras
After installation the default Keras backend is TensorFlow, so to switch to Theano:
1. The Keras backend is configured in the keras.json file, so open it in its directory (if the file does not exist, it will be created automatically):
sudo gedit ~/.keras/keras.json
2. Set theano as the backend:
{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
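To confirm the switch took effect, the backend name can be printed from Python (this assumes the Keras 1.x-era API that matches the image_dim_ordering key above):
from keras import backend as K
print(K.backend())             # expected: theano
print(K.image_dim_ordering())  # expected: th
When the switch has worked, importing keras also prints "Using Theano backend." on startup.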
IV. Test program (the code comes from the post "Ubuntu15.10_64位安装Theano+cuda7.5详细笔记")
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x #threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())

t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()

print('Looping %d times took' % iters, t1 - t0, 'seconds')
print('Result is', r)
# If any op in the compiled graph is a plain (CPU) Elemwise, the GPU was not used.
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
If you see output like the following, the installation succeeded:
runfile('/home/isi/.config/spyder/temp.py', wdir='/home/isi/.config/spyder')
WARNING (theano.sandbox.cuda): The cuda backend is deprecated and will be removed in the next release (v0.10). Please switch to the gpuarray backend. You can get more information about how to switch at this URL: https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29
Using gpu device 0: GeForce GTX 1050 (CNMeM is disabled, cuDNN not available)
or
runfile('/home/isi/.config/spyder/temp.py', wdir='/home/isi/.config/spyder')
Reloaded modules: cuda_ndarray, cutils_ext, cuda_ndarray.cuda_ndarray, lazylinker_ext.lazylinker_ext, tmpxIZo3D.8caca893ab41ad4849afd885dc106b92, tmpihAji6.544270fe7a21a748315f83abfe0913cc, lazylinker_ext, tmpxIZo3D, tmpihAji6, cutils_ext.cutils_ext
[GpuElemwise{exp,no_inplace}(), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
('Looping 1000 times took', 0.26572704315185547, 'seconds')
('Result is', array([ 1.23178029, 1.61879349, 1.52278066, ..., 2.20771813, 2.29967761, 1.62323296], dtype=float32))
Used the gpu
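Note that the warning in the first sample output says the old cuda backend is deprecated in favour of the gpuarray backend. For newer Theano releases (0.9 or later, assuming libgpuarray/pygpu is installed; not required for the setup described here), a minimal sketch of the corresponding .theanorc would be:
[global]
floatX=float32
device=cuda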