Important Dates

·  Full Paper [or Extended Abstract] Submission: September 20th, 2015

·  Notification of Acceptance: October 1st, 2015

·  Camera-Ready Paper Due: October 10th, 2015

·  Workshop: December 17th, 2015

   * Selected papers will be invited to a special issue of Machine Vision and Applications (MVA).

Submission Information

·  The submission website on CMT is now open. Please follow the same author guidelines as for ICCV 2015 submissions (http://pamitc.org/iccv15/author_guidelines.php). Author kits are available here. In addition to full papers, extended abstract submissions are also welcome.

Invited Speakers / Panelists

·  Dan Ellis, Columbia University

·  Greg Leeming, Intel

·  Ivan Dokmanic, EPFL/UIUC

·  Jie Yang, NSF, US

·  Ming Lin, University of North Carolina at Chapel Hill, US

·  Radu Patrice Horaud, INRIA

·  Ramesh Raskar, MIT

·  Ravish Mehra, Oculus Research

·  Zhengyou Zhang, Microsoft Research

More will be announced soon.

Call for Papers

One of the driving factors for innovation in computer vision is the availability of new types of input sensors and of algorithms to process the sensor data. Within the scope of this workshop, we aim to investigate (or reintroduce) another readily available but often ignored source of information -- audio or acoustic sensors -- that can be combined with visual cameras, enabling audio-visual sensing and a new generation of algorithms and applications.

There are two major thrusts for this workshop. The first is the multimodal analysis of videos with sound for improved recognition accuracy, spanning application areas such as audio-visual speech recognition, video categorization or classification, and event detection in videos, as well as technical areas such as early vs. late fusion and end-to-end training of models. The second is to explore the use of acoustic sensors to facilitate the reconstruction and understanding of 3D objects/models beyond the capability of current RGB-D sensors. This could include robust handling of scenes with specular or transparent objects, reconstruction around corners (i.e., non-line-of-sight) and through obstacles, or capturing other material characteristics (e.g., acoustic material properties for aural rendering). In this context, acoustic sensors cover a broad frequency range of sound, from subsonic to ultrasound.
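
For readers less familiar with the fusion terminology above, here is a minimal sketch in Python with numpy (random, untrained weights and made-up feature dimensions, purely illustrative) contrasting early fusion, which concatenates modality features before a single joint classifier, with late fusion, which combines the scores of per-modality classifiers.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy clip-level embeddings; the dimensions are arbitrary assumptions.
    audio_feat = rng.normal(size=128)    # e.g. a pooled spectrogram embedding
    visual_feat = rng.normal(size=512)   # e.g. a pooled CNN frame embedding
    num_classes = 10

    # Early fusion: concatenate the modalities, then apply one joint
    # (here random, untrained) linear classifier.
    W_early = rng.normal(size=(num_classes, 128 + 512))
    early_scores = W_early @ np.concatenate([audio_feat, visual_feat])

    # Late fusion: classify each modality separately, then average the scores.
    W_audio = rng.normal(size=(num_classes, 128))
    W_visual = rng.normal(size=(num_classes, 512))
    late_scores = 0.5 * (W_audio @ audio_feat) + 0.5 * (W_visual @ visual_feat)

    print("early-fusion class:", int(early_scores.argmax()))
    print("late-fusion class:", int(late_scores.argmax()))

End-to-end training, by contrast, would learn such weights jointly from the raw audio and video signals instead of applying fixed classifiers to precomputed embeddings.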

We solicit papers in all areas that can benefit from the combined use of audio and visual signals. Sample topics include, but are not limited to:

·  Multimodal sensing with both visual and aural sensors

·  Abnormality detection

·  Audio-visual speech recognition

·  Video categorization or classification (with an audio component)

·  Audio-visual communications

We particularly encourage position and forward-thinking papers. All papers will be reviewed by the workshop organizers.

Workshop Organizers

·  Dinesh Manocha, University of North Carolina at Chapel Hill

·  Marc Pollefeys, ETH Zurich

·  Rif A. Saurous, Google

·  Rahul Sukthankar, Google

·  Ruigang Yang, University of Kentucky