Registration
• To register, please follow the ICDAR2011 registration instructions for tutorials.
Date/Time of the tutorial: Sunday 18 September 2011 (morning)
Performance evaluation, based on objective measures and representative datasets, is crucial to making real progress in any field. In Document Image Analysis, a field with many different practical applications, specific methods have frequently been devised for different applications and evaluated in different ways on relatively small datasets specific to the target application. In Layout Analysis alone, many methods have been proposed for segmentation, yet it is not clear, when faced with a particular type of document, which method is most applicable or how an existing method can be improved to better suit a given application.
Objective performance evaluation requires two crucial elements: ground truth and evaluation methods. Both have to be accurate and detailed, enabling the provision of in-depth information to developers and users/integrators. The production of ground truth is expensive (especially in the large volumes required) and involves many decisions in terms of representation and consistency. Similarly, it is an important requirement that evaluation methods are efficient as well as flexibly applicable to different evaluation scenarios.
This tutorial will cover the key issues in performance evaluation in the most widely researched but, at the same time, most difficult to assess areas of Document Image Analysis. While the focus will be mostly on Layout Analysis and OCR, the evaluation of other areas such as binarisation and geometric correction will also be mentioned. All aspects of performance evaluation will be examined, from collecting a representative sample, through ground-truthing and defining evaluation metrics and scenarios, to interpreting the results.
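To illustrate the kind of evaluation metric such a workflow might define, the sketch below scores a page-segmentation result against ground truth by matching regions on the intersection-over-union (IoU) of their bounding boxes. This is a minimal, hypothetical example for intuition only; it is not one of the tutorial's actual metrics, and the `iou`, `evaluate`, and `threshold` names are illustrative.

```python
# Minimal sketch (not the tutorial's metrics): match detected layout
# regions to ground-truth regions by bounding-box IoU and report
# precision/recall. Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def evaluate(ground_truth, detected, threshold=0.5):
    """Greedy one-to-one matching; returns (precision, recall)."""
    unmatched = list(detected)
    matches = 0
    for gt in ground_truth:
        best = max(unmatched, key=lambda d: iou(gt, d), default=None)
        if best is not None and iou(gt, best) >= threshold:
            matches += 1
            unmatched.remove(best)
    precision = matches / len(detected) if detected else 1.0
    recall = matches / len(ground_truth) if ground_truth else 1.0
    return precision, recall
```

Real evaluation schemes for layout analysis are considerably richer (handling splits, merges, partial overlaps, and region types), which is precisely why defining metrics and scenarios carefully is a topic of the tutorial.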
The tutorial material will come from the presenters' extensive experience in research, implementation and practical application of performance evaluation and creation of ground-truthed datasets.
Participants will learn about the state of the art and gain valuable insights into the design, implementation and running of performance evaluation systems. Most importantly, they will be given copies of software tools and will be guided through example ground-truthing and evaluation workflows.
In addition to the tutorial slides, software copies of ground-truthing and evaluation tools will be made available and used during the hands-on sessions.