Searching Speech to Find Images

For audiovisual archives in the early 21st century, one of the great challenges is keeping up with, and improving on, the documentation of ever-increasing amounts of material. Daily radio and television broadcasts and new digital media productions pour into the archive in large volumes. One of the ways we think A/V productions can be made optimally 'Googleable' is by making use of speech recognition: automatically generated, time-labeled transcripts can be made searchable at the fragment level and serve as the basis for further contextualisation.
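To make the idea of fragment-level search concrete, here is a minimal sketch in Python of how a time-labeled transcript might be queried so that a match points to a fragment of a broadcast rather than the whole programme. The data format and field names are illustrative assumptions, not the actual CHoral or archive tooling.

```python
# Minimal sketch: searching time-labeled ASR transcript fragments.
# The Segment structure and example data are hypothetical.

from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # fragment start time in seconds
    end: float     # fragment end time in seconds
    text: str      # automatically generated transcript for this fragment

def search_fragments(segments, query):
    """Return (start, end, text) for every fragment whose transcript
    contains the query term, so playback can jump straight to it."""
    q = query.lower()
    return [(s.start, s.end, s.text) for s in segments if q in s.text.lower()]

# Example: a hypothetical broadcast transcript split into timed fragments.
transcript = [
    Segment(0.0, 12.4, "good evening and welcome to the news"),
    Segment(12.4, 31.0, "the university of twente hosts a seminar on searching speech"),
    Segment(31.0, 47.2, "speech recognition turns broadcasts into searchable text"),
]

for start, end, text in search_fragments(transcript, "speech"):
    print(f"{start:7.1f}-{end:7.1f}  {text}")
```

In practice the time labels would come from a speech recogniser and the matching would be done by a search engine rather than a substring test, but the principle is the same: the transcript fragments, not the whole programme, are the units of retrieval.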

Twente Symposium

On Thursday, July 5th, the University of Twente organises its 6th SIKS/Twente Seminar on Searching and Ranking. The seminar focuses on the topic of Searching Speech and aims to evaluate the state of speech recognition in context. When automated sources are used, the amount of information generated can add noise to the search results; the challenge is to teach a system to reduce that noise to a bare minimum. Twente researcher Laurens van der Werff will present his PhD research on noisy transcripts for spoken document retrieval. The symposium brings together researchers from companies and academia working on the evaluation of speech recognition techniques.

The symposium will take place on the campus of the University of Twente in the Citadel (building 9), lecture hall H327 (look here for travel information). The event is part of the Advanced Components Stage of the SIKS educational programme. Students working in the fields of Web-based Systems and Data Management, Storage and Retrieval are especially encouraged to participate. The symposium is organised by Franciska de Jong, Laurens van der Werff and Thijs Verschoor, members of the CHoral project team.

More Information