The International Workshop on Multiple Classifier Systems is a conference series whose goal is the development of theories, algorithms, and applications of ensemble machine learning methods.  The series, which began in 2000, was spawned by the theoretical and empirical successes of ensemble methods, especially in the 1990s.  This workshop has served as a forum for researchers working in this area to spend some focused time presenting and discussing recent advances and issues.  The sixth workshop was held just outside of Monterey, California, a block from the Pacific Ocean.  The location encouraged interaction, enabling, for example, a large ensemble to walk through the cool California evening to an excellent fish restaurant for informal discussions of multiple classifiers.

 

The workshop contained a mixture of theoretical presentations and presentations of applications.  It began with an interesting presentation by Hillol Kargupta on orthogonal decision trees.  By treating decision trees as functions, his group has been able to apply transformations that produce an orthogonal set of trees, resulting in high-accuracy ensembles.  There were several interesting presentations about how multiple classifiers work; in particular, analyses of variance reduction were a focus of this explanation.  There were also presentations discussing the use of multiple classifiers in an incremental learning context in which new classes can occur.  On the final day of the meeting, there were two talks on new research directions.  Fabio Roli gave a nice presentation on semi-supervised multiple classifier systems, in which both labeled and unlabeled data are used.  This is a niche that deserves exploration because of the well-understood difficulty of obtaining as much labeled data as one would like.  Kagan Tumer discussed a system of agents with an overall system-level objective in which each individual agent has its own goal.  This is a potential multiple classifier system, but the possible autonomy of the agents provides a different type of focus.
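To make the variance-reduction idea concrete, the following minimal Python sketch (illustrative only, not taken from any of the workshop presentations) compares a single decision tree with a bagged ensemble of trees on a synthetic classification task using scikit-learn.  The dataset parameters and the number of estimators are arbitrary choices for the example.

    # Illustrative sketch of variance reduction through ensembling:
    # averaging many trees trained on bootstrap samples typically
    # yields more stable accuracy than a single unpruned tree.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import BaggingClassifier

    # Synthetic data set; parameters are arbitrary for illustration.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    single_tree = DecisionTreeClassifier(random_state=0)
    bagged_trees = BaggingClassifier(n_estimators=25, random_state=0)  # bags decision trees by default

    print("single tree :", cross_val_score(single_tree, X, y, cv=5).mean())
    print("bagged trees:", cross_val_score(bagged_trees, X, y, cv=5).mean())

On most runs the bagged ensemble's cross-validated accuracy is higher and less sensitive to the data split, which is the variance-reduction effect the workshop analyses addressed in much greater depth.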

 

As is the tradition, there was a roundtable discussion, led by Terry Windeatt and Philip Kegelmeyer.  Three areas were discussed in an interesting way: performance claims, design and data principles, and the future of the field.  Attendees submitted claims or predictions, which were then discussed.  After discussion, a collective vote was taken, resulting in a type of multiple-classifier-system prediction.  There was general agreement that diversity among classifiers is important and that multiple classifier systems hold promise for incremental learning.  There was some agreement that it may be possible to analyze a data set to determine the best type of multiple classifier system to apply.  There was a general consensus that there is still room for improving multiple classifier systems.

 

The talks and discussions at the workshop gave attendees an in-depth view of recent work in the field of ensemble learning, or multiple classifier systems.  They also suggested new threads of research to follow, as well as established threads that require more work.  Details on the next MCS will be available at www.diee.unica.it/mcs.  These focused workshops are highly valuable for people interested in multiple classifier systems, and the next one will continue the tradition.

Workshop Report: MCS 2005

 

Workshop Chairs:

Nikunj C. Oza (NASA Ames Research Center, USA)
Robi Polikar (Rowan University, USA)
Josef Kittler (University of Surrey, UK)
Fabio Roli (University of Cagliari, Italy)

Sixth International Workshop on Multiple Classifier Systems

13-15 June, 2005, Seaside, CA, USA
Report prepared by:  Larry Hall

For more information on the MCS Workshop Series see:

www.diee.unica.it/mcs

Proceedings from MCS 2005 are available in the Springer Lecture Notes in Computer Science series, volume 3541.
