labs.beatcraft.com - Deep Learning
This article explains how to install Pylearn2 with CUDA. Pylearn2 is a machine learning library for Python. Since most of Pylearn2's functionality is built on top of Theano, models and algorithms written for Pylearn2 are expressed as mathematical expressions, which Theano compiles to CUDA code.
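To give a concrete sense of this workflow, here is a minimal sketch (an illustration added to this article, not one of the setup steps): a model is written as symbolic math, and Theano compiles it into a callable function. Whether the compiled function runs on the CPU or the GPU depends only on the device setting configured later in this article.

import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')                  # symbolic matrix of dtype floatX
y = T.nnet.sigmoid(T.dot(x, x.T))  # the model, written as a mathematical expression
f = theano.function([x], y)        # Theano compiles this graph to C (CPU) or CUDA (GPU)

a = np.random.rand(4, 4).astype(theano.config.floatX)
print(f(a))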
This article uses the same hardware that was used to explain how to install CUDA on Ubuntu in the CUDA6.5/Ubuntu 14.04 article. The system is equipped with a Tesla K20c, and CUDA Toolkit 6.5 is installed on Ubuntu 14.04. This article shows how to set up Pylearn2 on CUDA 6.5.
For Pylearn2 to use the GPU backend correctly, CUDA has to be set up correctly on the system. Please check this article for installing CUDA Toolkit 6.5 on Ubuntu 14.04.
If the Python modules required for installing Pylearn2 and Theano are available in the repositories, install them with apt-get install. Otherwise, use pip to install the modules.
First, install the libraries required by Theano. The list of requirements is shown below; a quick check script follows the list.
- Python 2.6 or greater (This is the default Python on Ubuntu 14.04.)
- g++
- python-dev
- Numpy 1.5.0 or greater
- SciPy
- BLAS (Basic Linear Algebra Subprograms; Level 3 functionality is required)
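A quick way to confirm that this stack is in place is to import the modules and print their versions; numpy.__config__.show() also reports which BLAS NumPy was built against. A minimal check (an addition to this article):

import sys
import numpy
import scipy

print(sys.version)          # Python 2.6 or greater is required
print(numpy.__version__)    # 1.5.0 or greater is required
print(scipy.__version__)
numpy.__config__.show()     # lists the BLAS/LAPACK libraries NumPy was built against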
Then, the following optional packages are also installed.
- nose
- Sphinx 0.5.1 or greater
- Git
- pydot
- CUDA (already installed)
- libgpuarray
For reference, please read the article listed below, which covers the crucial points of installing Theano on Ubuntu: Easy Installation of an Optimized Theano on Current Ubuntu.
Install Git and the Python modules with apt-get.
$ sudo apt-get install git python-dev python-numpy python-scipy python-pip python-nose python-sphinx python-pydot
To provide BLAS, use OpenBLAS. The instructions for installing OpenBLAS are listed at this page; please follow them. (The OpenBLAS package for Ubuntu limits the number of threads to two, so please build and install OpenBLAS from scratch.)
$ sudo apt-get install gfortran
$ git clone git://github.com/xianyi/OpenBLAS
$ cd OpenBLAS
$ make FC=gfortran
$ sudo make PREFIX=/usr/local install
$ sudo ldconfig
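To get a rough feel for BLAS performance behind NumPy, a simple gemm timing can help. This is only a sketch added to this article: absolute timings depend on your hardware, and NumPy benefits from the new OpenBLAS only if it is actually linked against it (check with numpy.__config__.show() above).

import time
import numpy as np

a = np.random.rand(2000, 2000).astype(np.float32)
b = np.random.rand(2000, 2000).astype(np.float32)

t0 = time.time()
np.dot(a, b)  # one 2000x2000 gemm call
print('gemm took %.3f seconds' % (time.time() - t0))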
Before installing libgpuarray, install its requirements first. For instructions on how to install these requirements, please visit this page.
$ sudo apt-get install cmake check python-mako cython
Obtain the source code of libgpuarray from Git.
$ git clone https://github.com/Theano/libgpuarray.git
$ cd libgpuarray
If you execute the CMake build command as-is, an error occurs at the link to pthread.
To avoid this error, modify CMakeLists.txt as shown below.
$ cd src
$ vim CMakeLists.txt
Before modifying CMakeLists.txt:
if(CUDA_FOUND)
  target_link_libraries(gpuarray ${CUDADRV_LIBRARY} ${CUDA_CUBLAS_LIBRARIES})
  target_link_libraries(gpuarray-static ${CUDADRV_LIBRARY} ${CUDA_CUBLAS_LIBRARIES})
endif()

↓

After modifying CMakeLists.txt:
if(CUDA_FOUND)
  target_link_libraries(gpuarray pthread ${CUDADRV_LIBRARY} ${CUDA_CUBLAS_LIBRARIES})
  target_link_libraries(gpuarray-static pthread ${CUDADRV_LIBRARY} ${CUDA_CUBLAS_LIBRARIES})
endif()
After the modification is complete, go back to the top directory of libgpuarray. Follow the instructions below to build and install libgpuarray.
$ cd ..
$ mkdir Build
$ cd Build
$ cmake .. -DCMAKE_BUILD_TYPE=Release
$ make
$ sudo make install
$ sudo ldconfig
$ cd ..
pygpu, which is included in libgpuarray, is installed with setup.py. To install pygpu, apply the command lines listed below.
$ python setup.py build
$ sudo python setup.py install
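Once pygpu is installed, a small smoke test added here can confirm that it sees the CUDA device. The device string 'cuda0' follows the libgpuarray documentation; treat its exact form as an assumption for your setup.

import pygpu

ctx = pygpu.init('cuda0')  # context on the first CUDA device (device string is an assumption)
print(ctx.devname)         # should report the GPU, e.g. the Tesla K20c used in this article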
This essentially completes the prerequisites for installing Theano. Since Pylearn2 recommends installing a newer version of Theano (please look at this page), download and install the bleeding-edge version of Theano. Following the instructions listed at the URL shown below, obtain the newest version of Theano with Git.
http://deeplearning.net/software/theano/install.html
$ pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git
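A short check, added to this article, confirms which Theano ended up installed; bleeding-edge builds typically report a version with a dev suffix.

import theano

print(theano.__version__)    # bleeding-edge installs report a ".dev"-style version
print(theano.config.device)  # still "cpu" until .theanorc (configured below) selects the GPU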
Once the installation of Theano succeeds, Theano works on the CPU backend. As described in the article at this page, Theano needs to be configured to use the GPU.
Create .theanorc in the home directory and write the contents listed below.
[global]
floatX = float32
device = gpu
mode = FAST_RUN

[nvcc]
fastmath = True

[cuda]
root = /usr/local/cuda

[blas]
ldflags = -lopenblas
When Theano runs, .theanorc is read. To examine whether the GPU is actually in use, execute the example listed at this page; a sketch of it follows below. Another way to check is to run check_blas.py while changing the device option, and then compare the outputs and execution times for the different device values. Those runs are shown after the sketch.
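For reference, the GPU test in the Theano documentation is essentially the following: it times a compiled exp() over a large vector and prints the optimized graph, whose ops start with Gpu when the GPU is used. This is a sketch adapted from those docs, not a verbatim copy.

# gpu_check.py -- adapted from the Theano documentation's "Using the GPU" test
import time

import numpy
from theano import function, config, shared, tensor

vlen = 10 * 30 * 768  # vector length used by the docs' example
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))

t0 = time.time()
for i in range(iters):
    r = f()
print('Looping %d times took %f seconds' % (iters, time.time() - t0))

# With device=gpu the printed graph contains ops such as GpuElemwise / HostFromGpu
print(f.maker.fgraph.toposort())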
Not using Tesla K20c:
$ THEANO_FLAGS=floatX=float32,device=cpu python /usr/local/lib/python2.7/dist-packages/theano/misc/check_blas.py
-- Skipping --
mkl_info:
    NOT AVAILABLE
Numpy dot module: numpy.core._dotblas
Numpy location: /usr/lib/python2.7/dist-packages/numpy/__init__.pyc
Numpy version: 1.8.2
We executed 10 calls to gemm with a and b matrices of shapes (2000, 2000) and (2000, 2000).
Total execution time: 1.09s on CPU (with direct Theano binding to blas).
Try to run this script a few times. Experience shows that the first time is not as fast as followings calls. The difference is not big, but consistent.
Using Tesla K20c:
$ THEANO_FLAGS=floatX=float32,device=gpu python /usr/local/lib/python2.7/dist-packages/theano/misc/check_blas.py
Using gpu device 0: Tesla K20c
-- Skipping --
mkl_info:
    NOT AVAILABLE
Numpy dot module: numpy.core._dotblas
Numpy location: /usr/lib/python2.7/dist-packages/numpy/__init__.pyc
Numpy version: 1.8.2
nvcc version:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2014 NVIDIA Corporation
Built on Thu_Jul_17_21:41:27_CDT_2014
Cuda compilation tools, release 6.5, V6.5.12
We executed 10 calls to gemm with a and b matrices of shapes (2000, 2000) and (2000, 2000).
Total execution time: 0.08s on GPU.
Try to run this script a few times. Experience shows that the first time is not as fast as followings calls. The difference is not big, but consistent.

Comparing the two runs, the GPU completes the same gemm benchmark in 0.08 seconds versus 1.09 seconds on the CPU, roughly a 13x speedup.
To install Pylearn2, PyYAML and PIL are required in addition to Theano.
(Because PIL is a dependency of CUDA, PIL was already installed when CUDA was introduced to the system.)
$ sudo apt-get install python-yaml python-pil
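Both dependencies can be verified with a pair of imports, a small addition to this article; the module names below correspond to the python-yaml and python-pil packages.

import yaml            # provided by the python-yaml package
from PIL import Image  # provided by the python-pil package

print('PyYAML and PIL import cleanly')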
Now that the prerequisites for Pylearn2 are installed, finally download the source code of Pylearn2 with git clone and install it.
$ git clone git://github.com/lisa-lab/pylearn2.git
$ cd pylearn2
$ sudo python setup.py develop
After the installation process is completed, please add the data path configuration, which is required for running Pylearn2, to .bashrc.
Please create a data directory for storing the datasets. You can create this directory anywhere you have write permission. In this example, the data directory is created under the home directory and registered in .bashrc.
$ mkdir -p pylearn2data
$ echo 'export PYLEARN2_DATA_PATH=/home/beat/pylearn2data' >> .bashrc
$ . ~/.bashrc
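To confirm that the shell picked up the variable, print it from Python; os.environ is the standard mechanism, and the expected value is simply the path chosen above.

import os

print(os.environ.get('PYLEARN2_DATA_PATH'))  # expect /home/beat/pylearn2data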
Then, install matplotlib. This is required for running the Pylearn2 tutorial.
$ sudo apt-get install python-matplotlib
To check whether Pylearn2 is set up correctly, execute the Quick-start example. The details of this example are listed at this page.
$ cd /home/beat/work/pylearn2/pylearn2/scripts/tutorials/grbm_smd
$ python make_dataset.py
Using gpu device 0: Tesla K20c
Traceback (most recent call last):
  File "make_dataset.py", line 27, in <module>
    train = cifar10.CIFAR10(which_set="train")
  File "/home/beat/work/pylearn2/pylearn2/datasets/cifar10.py", line 76, in __init__
    raise IOError(fname + " was not found. You probably need to "
IOError: /home/beat/pylearn2/data/cifar10/cifar-10-batches-py/data_batch_1 was not found. You probably need to download the CIFAR-10 dataset by using the download script in pylearn2/scripts/datasets/download_cifar10.sh or manually from http://www.cs.utoronto.ca/~kriz/cifar.html
As the error above shows, the dataset has to be downloaded before executing the example.
$ cd ../../datasets
$ ./download_cifar10.sh
Downloading and unzipping CIFAR-10 dataset into /home/beat/pylearn2/data/cifar10...
cifar-10-batches-py/
cifar-10-batches-py/data_batch_4
cifar-10-batches-py/readme.html
cifar-10-batches-py/test_batch
cifar-10-batches-py/data_batch_3
cifar-10-batches-py/batches.meta
cifar-10-batches-py/data_batch_2
cifar-10-batches-py/data_batch_5
cifar-10-batches-py/data_batch_1
2015-01-16 15:39:45 URL:http://www.cs.utoronto.ca/~kriz/cifar-10-python.tar.gz [170498071/170498071] -> "-" [1]
Once the download has completed, re-execute the example.
$ cd ../tutorials/grbm_smd/
$ python make_dataset.py
Using gpu device 0: Tesla K20c
loading file /home/beat/pylearn2/data/cifar10/cifar-10-batches-py/data_batch_1
loading file /home/beat/pylearn2/data/cifar10/cifar-10-batches-py/data_batch_2
loading file /home/beat/pylearn2/data/cifar10/cifar-10-batches-py/data_batch_3
loading file /home/beat/pylearn2/data/cifar10/cifar-10-batches-py/data_batch_4
loading file /home/beat/pylearn2/data/cifar10/cifar-10-batches-py/data_batch_5
loading file /home/beat/pylearn2/data/cifar10/cifar-10-batches-py/test_batch
/home/beat/work/pylearn2/pylearn2/datasets/preprocessing.py:1187: UserWarning: This ZCA preprocessor class is known to yield very different results on different platforms. If you plan to conduct experiments with this preprocessing on multiple machines, it is probably a good idea to do the preprocessing on a single machine and copy the preprocessed datasets to the others, rather than preprocessing the data independently in each location.
  warnings.warn("This ZCA preprocessor class is known to yield very "
computing zca of a (150000, 192) matrix
cov estimate took 0.27054309845 seconds
eigh() took 0.0118489265442 seconds
/home/beat/work/pylearn2/pylearn2/datasets/preprocessing.py:1280: UserWarning: Implicitly converting mat from dtype=float64 to float32 for gpu
  '%s for gpu' % (mat.dtype, floatX))
/home/beat/work/pylearn2/pylearn2/datasets/preprocessing.py:1283: UserWarning: Implicitly converting diag from dtype=float64 to float32 for gpu
  '%s for gpu' % (diags.dtype, floatX))
To use the scripts located in the directory pylearn2/scripts/, add this directory to PATH.
$ export PATH=/home/beat/work/pylearn2/pylearn2/scripts:$PATH
Execute train.py.
$ train.py cifar_grbm_smd.yaml
Using gpu device 0: Tesla K20c
Parameter and initial learning rate summary:
    W: 0.10000000149
    bias_vis: 0.10000000149
    bias_hid: 0.10000000149
    sigma_driver: 0.10000000149
Compiling sgd_update...
Compiling sgd_update done. Time elapsed: 7.771741 seconds
compiling begin_record_entry...
compiling begin_record_entry done. Time elapsed: 0.089102 seconds
Monitored channels:
    bias_hid_max
    bias_hid_mean
    bias_hid_min
    bias_vis_max
    bias_vis_mean
    bias_vis_min
    h_max
    h_mean
    h_min
    learning_rate
    objective
    reconstruction_error
    total_seconds_last_epoch
    training_seconds_this_epoch
Compiling accum...
graph size: 91
Compiling accum done. Time elapsed: 0.814388 seconds
Monitoring step:
    Epochs seen: 0
    Batches seen: 0
    Examples seen: 0
    bias_hid_max: -2.00000023842
    bias_hid_mean: -2.00000023842
    bias_hid_min: -2.00000023842
    bias_vis_max: 0.0
    bias_vis_mean: 0.0
    bias_vis_min: 0.0
    h_max: 8.27688127174e-05
    h_mean: 1.74318574864e-05
    h_min: 9.55541054282e-06
    learning_rate: 0.100000016391
    objective: 14.4279642105
    reconstruction_error: 70.9217071533
    total_seconds_last_epoch: 0.0
    training_seconds_this_epoch: 0.0
/home/beat/work/pylearn2/pylearn2/training_algorithms/sgd.py:586: UserWarning: The channel that has been chosen for monitoring is: objective.
  str(self.channel_name) + '.')
Time this epoch: 25.525986 seconds
Monitoring step:
    Epochs seen: 1
    Batches seen: 30000
    Examples seen: 150000
    bias_hid_max: -0.257617294788
    bias_hid_mean: -1.75261676311
    bias_hid_min: -2.36502599716
    bias_vis_max: 0.160428583622
    bias_vis_mean: -0.00086586253019
    bias_vis_min: -0.220651045442
    h_max: 0.410839855671
    h_mean: 0.0542325824499
    h_min: 0.0116947097704
    learning_rate: 0.100000016391
    objective: 3.62195086479
    reconstruction_error: 29.2136707306
    total_seconds_last_epoch: 0.0
    training_seconds_this_epoch: 25.5259819031
monitoring channel is objective
Saving to cifar_grbm_smd.pkl...
Saving to cifar_grbm_smd.pkl done. Time elapsed: 0.025346 seconds
Time this epoch: 25.384062 seconds
Monitoring step:
    Epochs seen: 2
    Batches seen: 60000
    Examples seen: 300000
    bias_hid_max: -0.305719166994
    bias_hid_mean: -2.00991845131
    bias_hid_min: -2.78829908371
    bias_vis_max: 0.185681372881
    bias_vis_mean: -0.000737291120458
    bias_vis_min: -0.177558258176
    h_max: 0.394594907761
    h_mean: 0.0468980930746
    h_min: 0.0104174567387
    learning_rate: 0.100000016391
    objective: 3.38024163246
    reconstruction_error: 28.5441741943
    total_seconds_last_epoch: 25.89610672
    training_seconds_this_epoch: 25.3840618134
monitoring channel is objective
Saving to cifar_grbm_smd.pkl...
Saving to cifar_grbm_smd.pkl done. Time elapsed: 0.025256 seconds
Time this epoch: 25.465318 seconds
Monitoring step:
    Epochs seen: 3
    Batches seen: 90000
    Examples seen: 450000
    bias_hid_max: -0.302897870541
    bias_hid_mean: -2.12691950798
    bias_hid_min: -3.09918379784
    bias_vis_max: 0.168909445405
    bias_vis_mean: 0.000913446128834
    bias_vis_min: -0.161776274443
    h_max: 0.389986425638
    h_mean: 0.0441780276597
    h_min: 0.00789143983275
    learning_rate: 0.100000016391
    objective: 3.30141615868
    reconstruction_error: 28.4002838135
    total_seconds_last_epoch: 25.7539100647
    training_seconds_this_epoch: 25.4653167725
monitoring channel is objective
Saving to cifar_grbm_smd.pkl...
Saving to cifar_grbm_smd.pkl done. Time elapsed: 0.025410 seconds
Time this epoch: 25.288767 seconds
Monitoring step:
    Epochs seen: 4
    Batches seen: 120000
    Examples seen: 600000
    bias_hid_max: -0.329535990953
    bias_hid_mean: -2.19633841515
    bias_hid_min: -3.181681633
    bias_vis_max: 0.171140804887
    bias_vis_mean: -0.000430780899478
    bias_vis_min: -0.197250261903
    h_max: 0.39044636488
    h_mean: 0.0431808494031
    h_min: 0.00783428177238
    learning_rate: 0.100000016391
    objective: 3.28094577789
    reconstruction_error: 28.5033798218
    total_seconds_last_epoch: 25.8351802826
    training_seconds_this_epoch: 25.2887706757
monitoring channel is objective
growing learning rate to 0.101000
Saving to cifar_grbm_smd.pkl...
Saving to cifar_grbm_smd.pkl done. Time elapsed: 0.025562 seconds
Saving to cifar_grbm_smd.pkl...
Saving to cifar_grbm_smd.pkl done. Time elapsed: 0.025118 seconds
cifar_grbm_smd.pkl is output.
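The saved pickle can also be inspected from Python with Pylearn2's serial loader. The sketch below is an addition to this article; it prints the model and the monitored channels recorded during training.

from pylearn2.utils import serial

model = serial.load('cifar_grbm_smd.pkl')  # pylearn2's pickle loader
print(model)                               # summary of the trained GRBM
print(model.monitor.channels.keys())       # monitored channels such as "objective"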
To check the results, apply show_weights.py to cifar_grbm_smd.pkl. If you execute it without any configuration, you will receive a warning asking for the viewer configuration. Follow the instruction in the warning.
$ export PYLEARN2_VIEWER_COMMAND="eog --new-instance"
Then, try to execute the example again.
$ show_weights.py cifar_grbm_smd.pkl
When the command line above is executed, the Gabor-like filters that Pylearn2 learned during training are displayed in Eye of GNOME.
With the --out option, the results can be written to an image file instead. The command line below shows how to apply the option, together with its output.
$ show_weights.py cifar_grbm_smd.pkl --out=weights.png
Using gpu device 0: Tesla K20c
making weights report
loading model
loading done
loading dataset...
...done
smallest enc weight magnitude: 3.91688871559e-07
mean enc weight magnitude: 0.0586505495012
max enc weight magnitude: 0.99245673418
min norm: 0.899496912956
mean norm: 1.37919783592
max norm: 1.96336913109
- 2015/02/13 This article was initially uploaded.