Supervision: Sami Karkar, Gilles André Courtois
Project type: Semester project (bachelor)
Finished
The rendering of 3D audio environments can be achieved by many means, such as 2D or 3D loudspeaker arrays, or through headphones (binaural rendering). Depending on the rendering hardware, the spatialization of sound is made possible by signal processing techniques of varying sophistication (phase/amplitude pan-potting, Head-Related Transfer Functions, ambisonics, wave field synthesis, ...).
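As an illustration of the simplest technique listed above, here is a minimal sketch of constant-power amplitude pan-potting, written in Python with NumPy (an assumption; the project itself targets Matlab, MaxMSP, or a VST). The sine/cosine gain law keeps the total signal power constant as a source moves between the two channels.

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Pan a mono signal to stereo with constant-power (sine/cosine) gains.

    pan: -1.0 (full left) to +1.0 (full right).
    Returns an (N, 2) stereo array.
    """
    theta = (pan + 1.0) * np.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=-1)

# At center pan, both gains equal cos(pi/4) ~ 0.707, so the summed
# power L^2 + R^2 matches the power of the original mono signal.
signal = np.ones(8)
stereo = constant_power_pan(signal, 0.0)
```

With this gain law, `stereo[:, 0]**2 + stereo[:, 1]**2` equals the mono signal's power for any pan position, which avoids the perceived loudness dip at center that linear panning produces.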
The signal processing capabilities available in new communication devices such as smartphones and tablets allow the user to listen to realistic 3D sound environments, while also providing an ideal support for displaying the audio scene (real-time assignment of source/listener positions on the tablet display). The project consists in developing such an application, allowing the real-time rendering of 3D audio scenes that can be modified through the user's tactile interaction.
The project can be divided into the following tasks:
- Implementation of binaural rendering of 3D audio scenes on headphones (HRTF implementation)
- Development of the interactive display
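The first task above can be sketched as follows: binaural rendering of a point source amounts to convolving the mono source signal with the pair of head-related impulse responses (HRIRs, the time-domain form of HRTFs) measured for the source's direction. This is a minimal Python/NumPy sketch; the HRIR data here is a hypothetical toy pair, not a measured set, and a real implementation would interpolate HRIRs as the source moves.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Render a mono source for headphones by convolving it with a
    left/right pair of head-related impulse responses.

    mono:       1-D array, the dry source signal
    hrir_left:  1-D array, left-ear impulse response
    hrir_right: 1-D array, right-ear impulse response (same length)
    Returns an (N + L - 1, 2) stereo array for headphone playback.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Toy HRIRs (hypothetical): the right ear receives a delayed, attenuated
# copy of the signal, mimicking the interaural time and level differences
# produced by a source located to the listener's left.
hrir_l = np.array([1.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.5])  # 2-sample delay, -6 dB
rng = np.random.default_rng(0)
source = rng.standard_normal(1024)
out = render_binaural(source, hrir_l, hrir_r)
```

In a real-time renderer the direct `np.convolve` call would be replaced by block-based FFT convolution (overlap-add) so that source and listener positions can be updated at interactive rates.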
Nature of work: theory (50%), programming (50%)
Requirements: Audio Engineering (MA1) course (optional), Matlab or MaxMSP or VST programming (C++ or Java or .NET).