
Machine Learning & Auditory Processing Position at TU Berlin

Country/Region : Germany

Website : http://www.ni.tu-berlin.de

Description

Technische Universitaet Berlin's Department of Neural Information Processing offers a position in an international collaborative project on auditory processing and modeling, robotics, and machine learning. The successful candidate will develop and apply machine learning and signal processing techniques within an active computational model of auditory perception, in order to detect and annotate acoustic events in auditory-scene-analysis and quality-of-experience settings. The project and position are part of the TWO!EARS project, which is funded through the EU FET Open scheme (see the brief description below).
Starting date: Immediate
Salary level: E-13 TV-L
The position runs until the end of 2016.
Candidates should hold a recent PhD or Diplom/Master's degree, have excellent programming skills in an object-oriented language (MATLAB will be used), and have good knowledge of machine learning. Research experience in machine learning or its applications to auditory processing is an asset.
Application material (CV, list of publications, abstract of the PhD thesis (if applicable), abstract of the Diplom/Master's thesis, copies of certificates, and two letters of reference -- the letters may also be handed in later) should be sent, preferably by email, to:
Prof. Dr. Klaus Obermayer
MAR 5-6, Technische Universitaet Berlin, Marchstrasse 23
10587 Berlin, Germany
http://www.ni.tu-berlin.de/
email: oby-AT-cs.tu-berlin.de
TUB seeks to increase the proportion of women and particularly encourages women to apply. Women will be preferred given equal qualification.
Disabled persons will be preferred given equal qualification.
---
Consortium Summary:
TWO!EARS replaces current thinking about auditory modeling with a systemic approach in which human listeners are regarded as multi-modal agents that develop their concept of the world through exploratory interaction. The goal of the project is to develop an intelligent, active computational model of auditory perception and experience in a multi-modal context. Our novel approach is based on a structural link from binaural perception to judgment and action, realized by interleaved signal-driven (bottom-up) and hypothesis-driven (top-down) processing within an innovative expert-system architecture.

The system achieves object formation based on Gestalt principles, meaning assignment, knowledge acquisition and representation, learning, logic-based reasoning, and reference-based judgment. More specifically, it assigns meaning to acoustic events by combining signal- and symbol-based processing in a joint model structure, integrated with proprioceptive and visual percepts. It is therefore able to describe an acoustic scene in much the same way a human listener can: in terms of the sensations that sounds evoke (e.g. loudness, timbre, spatial extent) and their semantics (e.g. whether a sound is unexpected or a familiar voice).

The system will be implemented on a robotic platform that actively parses its physical environment, orients itself, and moves its sensors in a humanoid manner. It has an open architecture, so that it can easily be modified or extended -- a crucial property, since the cognitive functions to be modeled are domain- and application-specific. TWO!EARS will have significant impact on the future development of ICT wherever knowledge and control of aural experience is relevant, and will also benefit research in related areas such as biology, medicine, and sensory and cognitive psychology.
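The interleaving of signal-driven (bottom-up) and hypothesis-driven (top-down) processing described above can be sketched in a few lines of MATLAB, the project's working language. The sketch below is purely illustrative and is not part of the TWO!EARS system: the event classes, features, and likelihood model are invented stand-ins for the trained components a real system would use.

    % Illustrative sketch only: interleaved bottom-up / top-down processing
    % over an acoustic scene. All names and numbers are hypothetical.
    fs = 16000;                            % assumed sample rate (Hz)
    frameLen = round(0.02 * fs);           % 20 ms analysis frames
    signal = randn(fs, 2);                 % stand-in for 1 s of binaural audio

    hypotheses = {'speech', 'alarm', 'background'};          % hypothetical classes
    belief = ones(1, numel(hypotheses)) / numel(hypotheses); % uniform prior

    nFrames = floor(size(signal, 1) / frameLen);
    for k = 1:nFrames
        frame = signal((k-1)*frameLen + (1:frameLen), :);

        % Bottom-up: signal-driven feature extraction (placeholder features:
        % frame RMS and a crude measure of sample-to-sample change)
        feats = [sqrt(mean(frame(:).^2)), mean(abs(diff(frame(:, 1))))];

        % Top-down: hypothesis-driven evaluation; a trained classifier would
        % go here. We fake per-class likelihoods from the first feature.
        lik = exp(-abs(feats(1) - [0.9, 1.1, 1.0]));

        % Fuse new evidence with the current belief (naive Bayes-style update)
        belief = belief .* lik;
        belief = belief / sum(belief);
    end

    [~, best] = max(belief);
    fprintf('Most plausible event class: %s\n', hypotheses{best});

In a real system the top-down step would also drive action -- for example, reorienting the robot's sensors to gather evidence that discriminates between the remaining hypotheses.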
