Getting started

EBLearn is easy to set up and use. To get you started, this page will help you:

  1. Install EBLearn
  2. Work through a simple EBLearn tutorial


  1. For a quick download and install of EBLearn, follow the instructions for your OS: Linux, Windows or Mac.
  2. Make sure everything works on your system by running the tester (see the execution section of the installation instructions). All tests should pass before EBLearn can run smoothly on your system.


- Head over to our Google Groups page, where we can easily answer your questions

Overview Tutorials

These tutorials familiarize you with the training/testing pipeline.

You can also check these demos while working with tutorials:

Beginner Tutorials

EBLearn structure

(No C++ programming needed)

By the end of this series of tutorials, you will know how to build a classifier in EBLearn. That means you can build face detectors, handwriting recognition systems, and more. The possibilities are unlimited. We recommend going through these tutorials in order, so that you do not need to come back to an earlier tutorial when something is unclear.

Advanced Tutorials

Note: as general advice, when things do not go as expected, use the “_debug” version of the corresponding tool. For example, detect_debug rather than detect gives good feedback about what is happening.

  1. To understand the core tensor library, have a look at the simple demo and the libidx tutorial.
  2. To understand the core learning library, have a look at the mnist demo and the libeblearn tutorial.
  3. If necessary, implement your own modules and feed them to the existing training and classification/detection tools.

The following steps do not involve any programming, only modification of existing scripts and configuration files. All the necessary tools are already coded, from dataset compilation to training and classification, detection or regression.


  1. To train a model, proceed similarly to the face demo for a classification/detection task, or to the regression demo for a regression task:
    1. create your dataset using the dscompile, dsmerge and dssplit tools, or modify this script.
    2. design your network in a configuration file similar to mnist.conf.
    3. train your network with the train tool by calling 'train face.conf', or 'metarun face.conf' if your configuration file contains several configurations to be run concurrently.
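
Configuration files such as mnist.conf are plain lists of name = value assignments, with ${...} variable substitution. The fragment below is only an illustrative sketch of that format: the variable names here are hypothetical placeholders, so check the mnist.conf and face.conf files shipped with EBLearn for the actual keys. When a variable lists several space-separated values, metarun can expand the combinations into separate training jobs.

```
# Illustrative sketch only; variable names are hypothetical.
# See mnist.conf in the EBLearn demos for the real keys.
name       = face              # experiment name
root       = ${HOME}/face_ds   # where the compiled datasets live
eta        = 0.0001 0.00001    # two values: one metarun job per value
iterations = 20                # number of training epochs
```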


To use your trained machine as a detector, use the detect tool by calling 'detect face.conf', and make sure the trained network file is specified in the configuration along with an input source.
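
For detection, the relevant part of the configuration points detect at the trained weights and at an input source. Again, the key names below are hypothetical placeholders, not necessarily the exact variables used by the demos; refer to face.conf for the real ones.

```
# Illustrative sketch only; key names are hypothetical.
# See face.conf in the EBLearn demos for the real keys.
weights   = ${root}/face_net.mat      # trained network produced by train
classes   = ${root}/face_classes.mat  # class labels of the trained network
camera    = directory                 # read inputs from a directory
input_dir = /path/to/images           # ... instead of a live camera
```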

start.txt · Last modified: 2012/09/21 23:01 by soumith