The Journal of the Audio Engineering Society (JAES) just released a special issue on network audio. It features collaborative research on low-latency audio processing by UC Berkeley’s Parallel Computing Laboratory, entitled “A Multicore Operating System with QoS Guarantees for Network Audio Applications”:
While network-based mechanisms are important to enable deterministic transport of audio data from transmitter to receiver, there is an equally important role played by the operating systems that reside in audio devices of all sizes. The applications that receive and transmit audio are dependent on these operating systems to allocate processor and input/output resources. Authors Colmenares, Peters, Eads, Saxton, Jacquez, Kubiatowicz, and Wessel have presented Tessellation, an experimental operating system tailored to multicore processors, and have demonstrated how it enables network applications to meet their stringent time requirements.
The article can be found here. It will also appear in an upcoming textbook on parallel computing.
At the beginning of 2013, two important things changed for me: I left Berkeley and academia, and I moved to San Diego, California, to join Qualcomm R&D.
Here, I will continue doing applied research in acoustics, signal processing, and spatial audio technologies. Besides this, I am hoping to improve my surfing skills and to learn Spanish.
The forthcoming ACM Multimedia paper “Name That Room: Room Identification Using Acoustic Features in a Recording”, which I wrote together with Howard Lei and Gerald Friedland, received a nice ICSI news feature this week.
Read the article here.
Like last year, I am teaching two modules at CNMAT’s Max/MSP Summer School.
The patches from today’s lecture on Software Testing in Max (which is based on this paper) can be found here.
On Friday, I will use CNMAT’s multichannel loudspeaker system to teach spatial audio using Jamoma.
Last week I attended a lecture by neuroscientist Vittorio Gallese entitled “What is so special with embodied simulation”. Among other things, I was really surprised to learn that the brain encodes the positions of objects in space using egocentric as well as allocentric coordinate systems. Could that be a neurological argument for why SpatDIF supports more than one coordinate system?
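For readers unfamiliar with the two descriptions: SpatDIF allows positions to be given in Cartesian (xyz) as well as spherical aed (azimuth, elevation, distance) coordinates. A minimal sketch of the conversion, with the azimuth convention (measured from the front, +y, clockwise, in degrees) being my own assumption here:

```python
import math

def xyz_to_aed(x, y, z):
    """Convert Cartesian (x, y, z) to (azimuth, elevation, distance).

    Assumed convention: azimuth is measured from the front (+y axis),
    clockwise, in degrees; elevation in degrees from the horizontal plane.
    """
    distance = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(x, y))
    elevation = math.degrees(math.asin(z / distance)) if distance else 0.0
    return azimuth, elevation, distance

# A source one meter straight ahead: azimuth 0, elevation 0, distance 1.
print(xyz_to_aed(0.0, 1.0, 0.0))
```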
Bayes’ theorem (from Wikipedia)
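For reference, the theorem shown in the image:

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```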
The new school year started here at UCB a few weeks ago. After some mellow summer weeks, the campus is crowded with students again.
Motivated by recent publications that apply machine learning tools to room acoustics research (such as Shabtai et al. on room volume classification based on room impulse responses), I decided to extend my machine learning knowledge and attend the courses Introduction to Machine Learning with Stuart Russell and Statistical Learning Theory with Michael Jordan and Martin Wainwright. I am going to use this new knowledge in a spatial sound classification project at the end of the year.
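To give a flavor of the kind of acoustic features such classifiers consume (this is my own illustration, not the method of the cited paper): a classic room impulse response descriptor is the reverberation time RT60, which can be estimated via Schroeder backward integration of the RIR energy. A minimal sketch on a synthetic exponential-decay RIR:

```python
import numpy as np

def schroeder_edc(rir):
    """Energy decay curve (Schroeder backward integration), in dB."""
    energy = np.cumsum(rir[::-1] ** 2)[::-1]
    return 10.0 * np.log10(energy / energy[0])

def estimate_rt60(rir, fs):
    """Estimate RT60 from the -5 dB to -25 dB slope of the decay curve
    (a T20-style fit, extrapolated to 60 dB)."""
    edc = schroeder_edc(rir)
    i5 = np.argmax(edc <= -5.0)    # first sample below -5 dB
    i25 = np.argmax(edc <= -25.0)  # first sample below -25 dB
    t = np.arange(len(rir)) / fs
    slope, _ = np.polyfit(t[i5:i25], edc[i5:i25], 1)  # dB per second
    return -60.0 / slope

# Synthetic RIR with a 60 dB energy decay over 0.5 s (i.e. RT60 = 0.5 s)
fs = 8000
t = np.arange(int(fs * 1.0)) / fs
rir = np.exp(-6.9078 * t / 0.5)
print(round(estimate_rt60(rir, fs), 2))  # ~0.5 for this synthetic RIR
```

In a classification setting, features like this (per octave band, plus spectral statistics) would then be fed to a standard classifier.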