A few months after attending the Telluride Neuromorphic Cognition Engineering Workshop, where I was part of the auditory attention group, this video hit the web. I’m still amazed that we managed to develop this real-time system, probably the first of its kind.
Last week I attended a lecture by neuroscientist Vittorio Gallese entitled “What is so special with embodied simulation”. Among other things, I was surprised to learn that the brain encodes the positions of objects in space using egocentric as well as allocentric coordinate systems. Could that be the neurological argument for why SpatDIF supports more than just one coordinate system?