New papers

I am looking forward to SMC2012 in Copenhagen, Denmark, the 133rd AES Convention in San Francisco, USA, and ACM Multimedia 2012 in Nara, Japan:

  • Peters N., Schacher J., Lossius T.: SpatDIF: Principles, Specification, and Examples, to appear in Proc. of the 9th Sound and Music Computing Conference (SMC), Copenhagen, Denmark, 2012.
  • Peters N., Lossius T., Place T.: An Automated Testing Suite for Computer Music Environments, to appear in Proc. of the 9th Sound and Music Computing Conference (SMC), Copenhagen, Denmark, 2012.
  • Peters N., Choi J., Lei H.: Matching artificial reverb settings to unknown room recordings: a recommendation system for reverb plugins, to appear at 133rd AES Convention, San Francisco, 2012.
  • Peters N., Lei H., Friedland G.: Name That Room: Room identification using acoustic features in a recording, to appear at ACM Multimedia 2012, Nara, Japan, 2012.

New book on sound-field reproduction

Between The Ahnert and The Blauert, there is a new book in my library: The Ahrens:


From the description:

This book treats the topic of sound field synthesis with a focus on serving human listeners, though the approach can also be exploited in other areas such as underwater acoustics or ultrasonics. The author derives a fundamental formulation based on standard integral equations, and the single-layer potential approach is identified as a useful tool for deriving a general solution. He also proposes extensions to the single-layer potential approach which allow for a derivation of solutions for non-enclosing distributions of secondary sources such as circular, planar, and linear ones. Based on the above-described formulation, it is shown that the two established analytic approaches of Wave Field Synthesis and Near-field Compensated Higher Order Ambisonics constitute specific solutions to the general problem, covered by the single-layer potential solution and its extensions. The consequences of spatial discretization are analyzed in detail for all elementary geometries of secondary source distributions, and applications such as the synthesis of the sound field of moving virtual sound sources, focused virtual sound sources, and virtual sound sources with complex radiation properties are discussed.
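
In symbols, and only as my rough paraphrase of the single-layer potential idea: the synthesized field is written as an integral of a driving function over the secondary source distribution, P(x,ω) = ∮ D(x₀,ω) G(x|x₀,ω) dA(x₀), where G is the Green’s function of the secondary sources; as I read it, the derivations then boil down to finding D for the various source geometries.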

Another gem is the accompanying website, where Jens provides the Matlab source code for all figures used in the book – bravo Jens.

Exporting colored eps figures from Matlab

When you get stuck with Matlab, you find an answer to your problem usually in a reasonable amount of time – usually. Yesterday, I was really unlucky with the following.

My annoying problem was that (on a Mac) color plots exported as eps are rendered with blurry edges. See the left picture below. The right picture shows how my plot looks in Matlab and how I want it to be exported.

It was relatively easy to find people with a similar problem (here, or there) as well as an explanation for it: the culprit is the anti-aliasing rendering feature, which is enabled in practically every eps viewer. So the problem is not Matlab itself, but rather how Matlab and the eps renderer interact. However, finding a working solution was hard. After one day of searching (and finally using Bing instead of Google search), I found a working solution in this thread.

Here is the fix: in Matlab, in the Export Setup under Rendering, you have to use the OpenGL renderer, not the default painters renderer. That’s it!


I prefer to save figures directly from the script where I compose the plot in the first place. To do this, just add the magic line below to your Matlab code:

data = rand(100,3);              % some example data, just for illustration
plot(data);
set(gcf,'renderer','opengl');    % the magic line: switch the figure to the OpenGL renderer
saveas(gcf,'test.eps','epsc');   % export as colored eps
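
As a side note, the same trick should also work with print instead of saveas; as far as I can tell, the renderer can be passed directly as a flag (a sketch only, I haven’t tested this on every Matlab version):

plot(data);
print('-depsc','-opengl','test.eps');   % colored eps via print, forcing the OpenGL renderer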

I’m happy to have solved the issue, but I am also starting to wonder about alternative scientific plotting tools, such as Python’s matplotlib or Gnuplot.


Coordinate systems and the brain

Last week I attended a lecture by neuroscientist Vittorio Gallese entitled “What is so special with embodied simulation”. Among other things, I was really surprised to learn that the brain encodes positions of objects in space using egocentric as well as allocentric coordinate systems. Is that the neurological argument why SpatDIF supports more than just one coordinate system?
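
To make the two views a bit more concrete in Matlab terms (illustration only, not SpatDIF syntax, and the position values are made up): an allocentric, scene-fixed description might store absolute Cartesian coordinates, while an egocentric one is naturally expressed as azimuth, elevation, and distance relative to the listener.

x = 1; y = 2; z = 0.5;                 % hypothetical source position in meters
[az, el, dist] = cart2sph(x, y, z);    % Cartesian -> azimuth/elevation/distance (radians)
[x2, y2, z2] = sph2cart(az, el, dist); % and back, recovering the original point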

ViMiC 64-bit update

In the last few days I did some maintenance work on the ViMiC Max external, mainly to make it more efficient and to take advantage of Max6’s 64-bit audio signals. I’m pretty pleased with the outcome: running my test patch with 8 inputs and 5 outputs, the Vimic_lite method is now down to 14% (previously 18%) CPU, including first-order reflections. This update will be included in the next Jamoma release.

If you want to try it now, I’ve also updated my standalone demo app for spatializing Major Tom in a 5.0 ITU setup (Mac only).

Back from Jamoma development workshop

I just got back from the heartlands where 74objects generously hosted the second Jamoma development workshop of this year.

The workshop focused on audio processing within Jamoma, i.e. the JamomaDSP library and the Jamoma Audio Graph. Often our workshops end with a lot of unfinished and broken code due to conceptual changes in how we think Jamoma should work. This time was different: we actually managed to significantly improve performance and didn’t break anything, at least not on purpose. We dramatically improved the processing speed and memory footprint of the Jamoma Audio Graph and made progress on the Spatialization library. Moreover, Jamoma is ready for 64-bit processing, which will be supported with the upcoming Max6. (See the list of all changes here.)

As a side note, it was interesting, and a bit cumbersome, to use an iPad for sketching ideas on how to improve the pulling mechanism of our audio graph. The sketches ended up looking like a Jackson Pollock painting.

On Friday, the Kansas City Electronic Music and Arts Alliance (KcEMA) and the Kansas City Max User Group (MUG) invited us for a concert plus tech talk.

The ML semester

Bayes’ theorem (from Wikipedia)
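
For reference, the formula the figure showed: P(A|B) = P(B|A) · P(A) / P(B).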

The new school year started here at UCB a few weeks ago. After some mellow summer weeks, the campus is crowded with students again.

Motivated by recent publications which apply machine learning tools to room acoustics research (such as Shabtai et al. on room volume classification based on room impulse responses), I decided to extend my machine learning knowledge and attend the courses Introduction to Machine Learning with Stuart Russell and Statistical Learning Theory with Michael Jordan and Martin Wainwright. I’m going to use this new knowledge in a spatial sound classification project at the end of this year.