EBLearn
http://eblearn.cs.nyu.edu:21991/

about
http://eblearn.cs.nyu.edu:21991/doku.php?id=about
This project was mainly written by Pierre Sermanet and Yann LeCun, with optimizations, tools, and cross-platform support added by Soumith Chintala. Some new features were added by Qianli Liao. It is maintained by Pierre Sermanet, Soumith Chintala, and Qianli Liao.
It implements research conducted at the Computational and Biological Learning Laboratory of New York University.

advanced_tutorials
http://eblearn.cs.nyu.edu:21991/doku.php?id=advanced_tutorials
Advanced Tutorials (getting your hands dirty by writing some C++)
These tutorials take you through the general C++ usage of libidx and libeblearn so that you can use them in your own programs.
Libidx is a powerful tensor library with no external dependencies other than cmake and g++ to compile it, so you can use it as a tensor/matrix library instead of developing your own.
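As a quick taste of the API, here is a minimal sketch of creating and manipulating a tensor. The call names (idx_clear, set, get, select) follow the tutorial material but should be treated as assumptions and checked against your version of the headers:

  // minimal libidx sketch; call names are assumptions, check your headers
  #include "libidx.h"
  #include <cstdio>

  int main() {
    idx<float> m(3, 4);               // allocate a 3x4 float tensor
    idx_clear(m);                     // fill it with zeros
    m.set(42.0f, 1, 2);               // write the element at row 1, column 2
    float v = m.get(1, 2);            // read it back
    idx<float> row = m.select(0, 1);  // lightweight view of row 1, no copy
    printf("v = %f\n", v);
    return 0;
  }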

all_demos
http://eblearn.cs.nyu.edu:21991/doku.php?id=all_demos
Demos
* Libidx: simple demo
* MNIST demo
* Face detector demo
* Regression demo

all_docs
http://eblearn.cs.nyu.edu:21991/doku.php?id=all_docs
Documentation
* Libidx
* Libeblearn
* Libidxgui
* Libeblearngui
* Libeblearntools

all_tutorials
http://eblearn.cs.nyu.edu:21991/doku.php?id=all_tutorials
Overview Tutorials
These tutorials make you familiar with the training/testing pipeline.
* Training a state-of-the-art classifier on the SVHN dataset (64-bit only)
You can also check these demos while working through the tutorials:
----------
Beginner Tutorials
(No C++ programming needed)

android
http://eblearn.cs.nyu.edu:21991/doku.php?id=android
Android support has been added for libidx, libeblearn and libeblearntools.
Take a look at the eblearn/tools/mobile/android/eblearn/README.txt for a face detection demo.
In the Android demo, all assets included in the .apk are given a .mp3 extension so that they are not compressed. However, if you look at eblearn.java, the assets are transferred to /sdcard/eblearn without their .mp3 extension.

answer
http://eblearn.cs.nyu.edu:21991/doku.php?id=answer
answer_module
An answer_module serves two purposes:
* Transform the raw network outputs into the final answer. For example, class_answer outputs a 1-of-n discrete id rather than n continuous values.
* Interface a datasource class to feed a trainable network in a specific way. For example, class_answer transforms the sample's label from a discrete value into a 1-of-n target vector and feeds the sample and the target vector to the trainable network.
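To make both directions concrete, here is a small self-contained C++ sketch. This is not the eblearn answer_module API, just an illustration of the two transformations class_answer performs (argmax for outputs, 1-of-n encoding for labels); the -1/+1 target values are an assumption:

  #include <vector>
  #include <algorithm>
  #include <cstdio>

  // raw network outputs -> discrete class id (output direction)
  int to_answer(const std::vector<float>& outputs) {
    return (int)(std::max_element(outputs.begin(), outputs.end()) - outputs.begin());
  }

  // discrete label -> 1-of-n target vector (input direction)
  std::vector<float> to_target(int label, int n) {
    std::vector<float> t(n, -1.0f);  // assumed -1/+1 targets; 0/1 also common
    t[label] = 1.0f;
    return t;
  }

  int main() {
    std::vector<float> out = {0.1f, 0.7f, 0.2f};
    printf("answer = %d\n", to_answer(out));      // prints "answer = 1"
    std::vector<float> target = to_target(1, 3);  // {-1, 1, -1}
    return 0;
  }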

beginner_tutorial1_dscompile
http://eblearn.cs.nyu.edu:21991/doku.php?id=beginner_tutorial1_dscompile
Welcome to the first tutorial. In this tutorial, you will start with a fundamental task: building your dataset.
What is a dataset?
A dataset is a set of images that you give as input to your training algorithm. Each input image in your dataset is given a label (e.g. face, background, bicycle, ball). This helps the algorithm give a name to each type of entity that it learns.

beginner_tutorial2_train
http://eblearn.cs.nyu.edu:21991/doku.php?id=beginner_tutorial2_train
Tutorial 2: Creating and training a simple digit classifier
This tutorial explains classification in detail; you can also check the MNIST demo to get a general idea.
The Dataset
In this tutorial, we shall build a handwritten digit classifier with the datasets from the previous tutorial as the train/test data.
If you did not go through that tutorial or don't want to, the compiled dataset we are going to use in this tutorial can be downloaded here: mnist_compiled_data.zip.

beginner_tutorial3_face_detector
http://eblearn.cs.nyu.edu:21991/doku.php?id=beginner_tutorial3_face_detector
Now, let us create a practical face detector with decent performance.
Creating the face detector involves the following steps:
* Downloading the datasets
* Creating the datasets using dscompile utility
* Training the convnet using train utility
* Bootstrapping the convnet to reduce false-positives using detect utility
* Running detection and viewing the results on test images

beginner_tutorials
http://eblearn.cs.nyu.edu:21991/doku.php?id=beginner_tutorials
Beginner Tutorials (no C++ programming needed)
By the end of this series of tutorials, you will know how to build a classifier in EBLearn. That means you can build face detectors, handwriting recognition systems, etc. The possibilities are unlimited :-)
I recommend that you go through these tutorials in order, so that you do not need to come back to a tutorial if you don't understand something.

bootstrapping
http://eblearn.cs.nyu.edu:21991/doku.php?id=bootstrapping
Bootstrapping
Negatives extraction
# bootstrapping
bootstrapping = 1 # enable bootstrapping extraction or not
bootstrapping_save = 1 # save bootstrapped samples directly as a dataset
display_bootstrapping = 1 # display bootstrapping extraction
# paths
gt_path = /xml_path/ # path to xml annotations
input_dir = /images_path/ # path of images to bootstrap on
output_dir = ${current_dir} # path to save outputs
# bootstrapping params
gt_pos_matching = .5 # positives have to…

classify
http://eblearn.cs.nyu.edu:21991/doku.php?id=classify
The classify tool classifies inputs using existing classifier weights and outputs the predicted class (basically an fprop through the network).
The sample can be 1D, 2D, 3D, or higher-dimensional (whereas detect expects 2D or 3D inputs).

code
http://eblearn.cs.nyu.edu:21991/doku.php?id=code
Code
* Download & Installation
* Browse code (svn)

coding
http://eblearn.cs.nyu.edu:21991/doku.php?id=coding
Coding Guidelines for Developers
These guidelines are intended for developers who contribute to eblearn, to help produce coherent, easy-to-read, and bug-free code.
Documentation
* Comment your code as much as possible so users can understand it.
* Use the commenting syntax that can be used to automatically generate the online documentation (with Doxygen):
* example: linear_module class documentation generated by Doxygen and its original input.
* syntax:
* use //! instead of // for commen…

contact
http://eblearn.cs.nyu.edu:21991/doku.php?id=contact
Please report bugs at
http://code.google.com/p/eblearn/issues/
For feedback, questions and any other communication, you can contact
* Pierre Sermanet (firstname.lastname [at] gmail [dotcom])
* Soumith Chintala (firstname [at] gmail [dotcom])
* Qianli Liao (liao500km [at] gmail [dotcom])

dataset_conversion
http://eblearn.cs.nyu.edu:21991/doku.php?id=dataset_conversion
Dataset conversion
How to save an existing static dataset into a dynamic dataset or vice versa
A static dataset is stored as a single big matrix, which can take some time to load and fill up too much memory. A dynamic dataset remembers the offset of each sample, allowing it to load only one sample at a time, which is much more memory-efficient.
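The trade-off is easy to picture in code. The sketch below uses a hypothetical file layout (a table of per-sample byte offsets followed by raw samples), not eblearn's actual .mat format, to contrast the two access patterns:

  #include <cstddef>
  #include <cstdio>
  #include <vector>

  // Dynamic access: seek to sample i's stored offset and read only it.
  std::vector<float> load_sample_dynamic(FILE* f, const std::vector<long>& offsets,
                                         int i, size_t sample_len) {
    std::vector<float> sample(sample_len);
    fseek(f, offsets[i], SEEK_SET);                  // jump to sample i only
    fread(sample.data(), sizeof(float), sample_len, f);
    return sample;                                   // memory cost: one sample
  }

  // Static access: the whole NxD matrix must fit in memory at once.
  std::vector<float> load_all_static(FILE* f, size_t n, size_t sample_len) {
    std::vector<float> all(n * sample_len);
    fseek(f, 0, SEEK_SET);
    fread(all.data(), sizeof(float), all.size(), f);
    return all;                                      // memory cost: entire dataset
  }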

dataset_extraction
http://eblearn.cs.nyu.edu:21991/doku.php?id=dataset_extraction
Dataset Files
Datasets used for training are single files containing all data (single-file datasets are easier and safer to handle than scattered files).
A dataset named 'mydata' is typically composed of the following files:
* mydata_data.mat: the input samples. This file is a single matrix of size NxLxCxHxW, with N the number of samples, L the number of “layers” of each sample, C the number of channels of each layer, H the height and W the width.
* mydata_labels.mat: the label values co…
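Given that NxLxCxHxW layout, addressing one element in the flat data buffer follows the usual row-major stride rule; for illustration (this is not eblearn's loader code):

  #include <cstddef>

  // flat row-major offset of element (n, l, c, h, w) in an NxLxCxHxW buffer
  size_t flat_index(size_t n, size_t l, size_t c, size_t h, size_t w,
                    size_t L, size_t C, size_t H, size_t W) {
    return (((n * L + l) * C + c) * H + h) * W + w;
  }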

debugging
http://eblearn.cs.nyu.edu:21991/doku.php?id=debugging
High Level Debugging
Understanding why an architecture crashes or does not perform as expected can sometimes be difficult. Here are some common ways to investigate:
Debug mode
Use the “_debug” version of the corresponding tool. For example, detect_debug rather than detect will natively print a lot of information about what is happening, including what operations are performed with which input sizes and data ranges.
This allows you to trace the progression of the input through the network, giving yo…

demos
http://eblearn.cs.nyu.edu:21991/doku.php?id=demos
* Libidx: simple demo
* MNIST handwritten character classification
* Face detector demo
* Regression demo

detect
http://eblearn.cs.nyu.edu:21991/doku.php?id=detect
detect finds objects in images at multiple scales and outputs corresponding bounding boxes given a trained model defined by a configuration file. Example uses of the detect tool can be found in the face detection demo or the MNIST demo.
To use detect, call: …

dscompile
http://eblearn.cs.nyu.edu:21991/doku.php?id=dscompile
by Pierre Sermanet (January 10th, 2011)
dscompile assembles preprocessed samples for training and testing purposes. It accepts a variety of input formats, from simple directory structures to XML files describing object bounding boxes (e.g. PASCAL VOC format). See an example of sample extraction in this video.

ebl2matlab
http://eblearn.cs.nyu.edu:21991/doku.php?id=ebl2matlab
This tool converts eblearn/lush matrix files into Matlab matrix files.
Compilation
make ebl2matlab
Usage
./ebl2matlab <input: ebl matrix .mat> <output: Matlab matrix name> <Matlab variable name>

face_detector
http://eblearn.cs.nyu.edu:21991/doku.php?id=face_detector
Face detection
This face detection demo was trained using a cropped version of the Labeled Faces in the Wild face dataset.
Compiling
This demo uses the generic detection tool (see eblearn/tools/tools/src/detect.cpp).
See instructions to compile the 'detect' (multi-threaded) or 'stdetect' (single-threaded) project.

face_tutorial
http://eblearn.cs.nyu.edu:21991/doku.php?id=face_tutorial
Training a Face detector using multiple datasets
Scroll to bottom for detailed outputs
In this tutorial, you will learn how to design, train, and test a face detector using the YaleFacesB (Extended) and LFW databases.
The model is based on Convolutional Networks (ConvNets), which learn all features from scratch rather than using hand-designed features.

home
http://eblearn.cs.nyu.edu:21991/doku.php?id=home
----------
News
* 01/16/13: Released version 1.2 (Release Notes)
* Windows Binaries (x86 and x64) (Download)
* Source Package (All platforms) (eblearn_1.2_r2631-src.zip)
* 11/13/12: A bug tracker has been added on googlecode. Please report any bugs there.
* 11/03/12: Fixed the Android demo and added conf and detection-thread support for Android (see demo)
* 09/21/12: Added a Google Groups page, where we can easily answer your questions
* 07/20/12: ICPR'12 paper published with new…

install
http://eblearn.cs.nyu.edu:21991/doku.php?id=install
Instructions for: Linux, Windows or Mac OS
IDE instructions: Eclipse
Speeding up code using external libraries (SSE, IPP, OpenMP): Optimizations
----------
Download
* Sources via SVN:
svn co svn://svn.code.sf.net/p/eblearn/code/trunk/ eblearn

ios
http://eblearn.cs.nyu.edu:21991/doku.php?id=ios
Eblearn compiles with Xcode and can be (and has been) included in iOS apps.
No support will be provided for iOS.

libeblearn
http://eblearn.cs.nyu.edu:21991/doku.php?id=libeblearn
This is being ported. Click: Libeblearn tutorial

libidx
http://eblearn.cs.nyu.edu:21991/doku.php?id=libidx
libidx tutorial
The idx library provides support for tensor description and manipulation. It is self-contained and is the foundation of the eblearn learning library. In addition to generic tensor operations, it provides a set of operations specific to images, which are described as 3-dimensional tensors.

libidx_simple
http://eblearn.cs.nyu.edu:21991/doku.php?id=libidx_simple
This very simple demo shows how to use libidx and libidxgui: we first load an image with load_image() and display it, then select the blue channel, multiply it by 0.5, and display the image again.
code
Makefile
Output
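The demo's steps amount to only a few calls. As a rough sketch: only load_image() is confirmed by the description above; new_window(), draw_matrix(), select(), and idx_dotc() are assumed names for the libidx/libidxgui calls and may differ from the actual demo code linked here:

  // sketch of the demo's steps; call names other than load_image() are assumed
  #include "libidx.h"
  #include "libidxgui.h"

  int main(int argc, char** argv) {
    idx<ubyte> img = load_image<ubyte>(argv[1]); // HxWxC image as a 3D tensor
    new_window("libidx demo");
    draw_matrix(img);                    // display the original image
    idx<ubyte> blue = img.select(2, 2);  // view of the blue channel (dim 2, index 2)
    idx_dotc(blue, 0.5, blue);           // multiply the channel by 0.5 in place
    draw_matrix(img);                    // display the modified image
    return 0;
  }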

matshow
http://eblearn.cs.nyu.edu:21991/doku.php?id=matshow
matshow
matshow can display any image type as well as .mat image files. It can also display weights contained in a network weights file (see the '-conf' option). Users can zoom, pan, show multiple images at once, display image information, and change the display range (see the controls section).

metarun
http://eblearn.cs.nyu.edu:21991/doku.php?id=metarun
meta_comments = "#"
meta_max_jobs = 2 # limits the number of jobs running at the same time
meta_output_dir = /data/outputs/ # the root path for metarun outputs
meta_copy = "src/*" # copy files matching this pattern to job directory
meta_name=${name}${sz}_${machine}
meta_gnuplot_params="set term postscript enhanced color; set grid ytics;set ytics;set mytics;set grid mytics;set logscale y; set mxtics; set grid xtics; set pointsize 0.5; set key spacing .5;"
meta_gnuplot…

mnist
http://eblearn.cs.nyu.edu:21991/doku.php?id=mnist
MNIST
In this demo, we show how to train a convolutional neural network to classify images of handwritten digits.
Training
The demo is entirely designed via the demos/mnist/mnist.conf configuration file.
Hence no coding is required to create it. The architecture is defined via the 'arch' variable.

optimizations
http://eblearn.cs.nyu.edu:21991/doku.php?id=optimizations
Optimizations
EBLearn can run faster using code optimizations provided by external libraries:
* TH Tensor library: SSE Optimizations
* Intel IPP: float optimizations
* OpenMP: multi-core optimizations
* GPU (CUDA): CUDA Optimizations for convolutions

publications
http://eblearn.cs.nyu.edu:21991/doku.php?id=publications
In general, publications from the Computational and Biological Learning Lab and the VLG lab of New York University can be of interest.
Some publications related to EBLearn:
* Pierre Sermanet, Soumith Chintala and Yann LeCun: Convolutional Neural Networks Applied to House Numbers Digit Classification, ArXiv 2012.
* Pierre Sermanet and Yann LeCun: Traffic Sign Recognition with Multi-Scale Convolutional Networks, Proceedings of the International Joint Conference on Neural Networks (IJCNN'11), 2011.

regression
http://eblearn.cs.nyu.edu:21991/doku.php?id=regression
Here is a regression example: tools/demos/regression/uci-isolet/isolet.conf
To train it, you have to:
* Download and extract the UCI-Isolet dataset from here: datasets.tgz
* Modify the 'root' variable in your isolet.conf to point to your UCI-Isolet directory.
* Call: train isolet.conf

release_notes
http://eblearn.cs.nyu.edu:21991/doku.php?id=release_notes
Release Notes
Release Notes for version 1.2

release_notes_1.2
http://eblearn.cs.nyu.edu:21991/doku.php?id=release_notes_1.2
Demos
* Fixed and cleaned the mnist.conf demo, added comments and l2pool; run_type was missing. Disabled the training display, which was crashing.
* Cleaned face demo, added comments. Fixed best_cam.conf for face detection demo (demos/face)
EBLearn and Idx Core Library
* Introduced a much simpler state mechanism
* Fixed memory leaks introduced by the much simpler state mechanism :)

sidebar
http://eblearn.cs.nyu.edu:21991/doku.php?id=sidebar
* Get started
Code
* Download & Installation
* Browse code (svn)
Programming
* Tutorials
* Demos
* Documentation
* Tools
* Android
* iOS
* Coding Guidelines
Other
* Contact
* About
* Related Publications
* Sourceforge page

start
http://eblearn.cs.nyu.edu:21991/doku.php?id=start
Getting started
EBLearn is easy to set up and use.
To get you started, let us:
* Install EBLearn
* Do a simple tutorial on EBLearn
Installation
* For a quick download and install of eblearn, follow instructions for your OS: Linux, Windows or Mac.
* Make sure everything works on your system by running the tester (see the execution section of the installation instructions). All tests should pass for EBLearn to run smoothly on your system.

structure
http://eblearn.cs.nyu.edu:21991/doku.php?id=structure
The libraries are organized as follows. The header files (.h declarations and .hpp template implementations) are located in include/ directories and non-templated implementations (.cpp) in src/.
libidx
* idx.*: Tensor descriptors.
* idx<T>: a tensor descriptor class.
* midx<T>: a tensor of tensors class.
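For intuition about what a tensor descriptor holds, here is a generic dimensions-plus-strides sketch; it mirrors the idea behind idx<T>, not its actual members:

  #include <cstddef>
  #include <vector>

  // Generic tensor descriptor: shared storage + per-dimension sizes/strides.
  template <typename T>
  struct tensor_desc {
    T* storage;                  // underlying data (possibly shared by views)
    std::vector<size_t> dim;     // size of each dimension
    std::vector<size_t> stride;  // elements to skip per step in each dimension

    T& at(const std::vector<size_t>& coord) {
      size_t off = 0;
      for (size_t d = 0; d < dim.size(); ++d) off += coord[d] * stride[d];
      return storage[off];
    }
  };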

svhn_tutorial
http://eblearn.cs.nyu.edu:21991/doku.php?id=svhn_tutorial
This tutorial only works with 64-bit versions of EBLearn, as the dataset used is larger than 4GB when uncompressed.
Training a state-of-the-art classifier on the SVHN dataset
In this tutorial, you will learn how to design, train and test a state-of-the-art classifier for the Stanford/Google Street View House Numbers dataset.

svn
http://eblearn.cs.nyu.edu:21991/doku.php?id=svn
You can browse the source code at <http://eblearn.svn.sourceforge.net/viewvc/eblearn/trunk/>

tools
http://eblearn.cs.nyu.edu:21991/doku.php?id=tools
Tools
The eblearn tools help to create datasets, train models and run them.
Their code is located in eblearn/tools/tools/src and binaries are built into eblearn/bin, and installed on the system with 'make install'.
They can all be compiled by calling 'make tool', or simply 'make'.
Most tools will show a brief help when called without arguments.

train
http://eblearn.cs.nyu.edu:21991/doku.php?id=train
train
training_precision = double # double is recommended, but float is also possible
add_features_dimension = 1 # add a feature dimension of size 1 in front
test_display_modulo = 0 # modulo at which to display test results
show_configuration = 1 # prints entire configuration during initialization

weights_permuting
http://eblearn.cs.nyu.edu:21991/doku.php?id=weights_permuting
Permuting weights order
This is useful for loading an existing trained network whose module loading order has changed because of framework changes.
To rearrange the order of loaded weights, first identify the blocks of weights that need to be permuted by looking at the network construction outputs, using the “#params” number to find the end of each block. Then figure out the new order of these blocks and define the following variables in your configuration file:
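Independent of those configuration variables, the underlying operation is a straightforward block permutation; a generic C++ sketch (not eblearn code):

  #include <cstddef>
  #include <vector>

  // Rearrange contiguous blocks of a flat weight vector into a new order.
  // block_ends[i] is one past the end of block i (taken from the "#params" numbers).
  std::vector<float> permute_blocks(const std::vector<float>& w,
                                    const std::vector<size_t>& block_ends,
                                    const std::vector<size_t>& new_order) {
    std::vector<float> out;
    out.reserve(w.size());
    for (size_t b : new_order) {
      size_t begin = (b == 0) ? 0 : block_ends[b - 1];
      out.insert(out.end(), w.begin() + begin, w.begin() + block_ends[b]);
    }
    return out;
  }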