Category Archives: Perception

Objective Evaluation of Synthesised Environmental Sounds

Having recently attended DAFx for my fourth year, I presented my paper on Objective Evaluation of Synthesised Environmental Sounds.

The basic premise of the paper is that we can computationally measure how similar two sounds are using an objective metric. The metric itself can be evaluated using an iterative resynthesis approach, and a given similarity score can be validated through comparison to human perception.
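
As a rough sketch of what such an objective metric could look like – not the metric used in the paper, and with the file names, MFCC features and frame-wise Euclidean distance all being illustrative assumptions of mine – two sounds could be compared as follows:

```python
# A minimal sketch, NOT the metric from the paper: compare a recording and a
# synthesised sound by the mean Euclidean distance between their MFCC frames.
# File names, feature choice and distance measure are illustrative assumptions.
import numpy as np
import librosa

def mfcc_distance(path_a, path_b, sr=44100, n_mfcc=20):
    """Lower values indicate more similar spectral envelopes."""
    a, _ = librosa.load(path_a, sr=sr, mono=True)
    b, _ = librosa.load(path_b, sr=sr, mono=True)
    mfcc_a = librosa.feature.mfcc(y=a, sr=sr, n_mfcc=n_mfcc)
    mfcc_b = librosa.feature.mfcc(y=b, sr=sr, n_mfcc=n_mfcc)
    # Truncate to the shorter sound so the frame sequences line up.
    n = min(mfcc_a.shape[1], mfcc_b.shape[1])
    return float(np.mean(np.linalg.norm(mfcc_a[:, :n] - mfcc_b[:, :n], axis=0)))

print(mfcc_distance("recorded_rain.wav", "synthesised_rain.wav"))
```

In the iterative resynthesis setting, a score like this would be computed repeatedly while the synthesiser's parameters are adjusted to drive the distance down, and the resulting scores can then be compared against listener judgements.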

I hope this made sense, but if not, please get in touch and I would be happy to explain further. The paper will be available on the DAFx website shortly.

Sound Synthesis – Are we there yet?

TL;DR. Yes

At the beginning of my PhD, I began to read the sound effect synthesis literature, and I quickly discovered that there was little to no standardisation or consistency in the evaluation of sound effect synthesis models – particularly in relation to the sounds they produce. Surely one of the most important aspects of a synthesis system is whether it can artificially produce a convincing replacement for the sound it is intended to synthesise. We could have the most tractable and relatable sound model in the world, but if it does not sound anything like it is intended to, will any sound designers or end users ever use it?

There are many different methods for measuring how effective a sound synthesis model is. Jaffe proposed evaluating synthesis techniques for music based on ten criteria. However, only two of the ten criteria actually consider any sounds made by the synthesiser.

This is crazy! How can anyone know which synthesis method can produce a convincingly realistic sound?

So, we performed a formal evaluation study in which a range of different synthesis techniques were compared in a range of different situations. Some synthesis techniques proved indistinguishable from a recorded sample in a fixed-medium environment. In short – yes, we are there yet. There are sound synthesis methods that sound more realistic than high-quality recorded samples. But there is clearly so much more work to be done…

For more information, read the paper here

DAFx Day 3

Day three of the Digital Audio Effects Conference (DAFx15) began with an excellent introduction to and summary of Wave Digital Filters and Digital Waveguides by Kurt Werner and Julius O. Smith from CCRMA, in which the current state of the art in physical modelling of nonlinearities was presented, along with some potential avenues for future exploration. Work following on from this was then discussed.

DAFx Conference 2015

The DAFx conference began with a tutorial day, where Peter Svensson provided a fantastic summary of the state of the art in sound field propagation modelling and virtual acoustics.

Slide from DAFx 15 Day 1

During lunch, as it was getting dark, the snow started, which unfortunately blocked our view of the Northern Lights that afternoon. Øyvind Brandtsegg & Trond Engum then discussed cross-adaptive digital audio effects and their creative use in live performance. They referenced existing work at Queen Mary as some of the state of the art, and then presented NTNU's current work on cross-adaptive audio effects. The workshop day was rounded off with Xavier Serra discussing the Audio Commons project and the use of open audio content.

Upcoming Events

There is a range of interesting and exciting upcoming events in the field of audio technology, including:

Listening in the Wild – A machine listening workshop hosted at Queen Mary University on the 25th of June. This will discuss how animals and machines can listen to complex soundscapes. More information here: http://www.eecs.qmul.ac.uk/events/view/listening-in-the-wild-animal-and-machine-hearing-in-multisource-environment

Intelligent Music Production – A workshop presented at Birmingham City University on the 8th September on the current state of the art in audio production technology, perception and future implications. Details are here: http://www.aes-uk.org/forthcoming-meetings/aes-midlands-workshop-on-intelligent-music-production/

Both of these events are free to attend, and promise to be very exciting indeed.

Visiting Researcher

Over the past month, I have been working closely with visiting researcher Luca Turchet [http://www.lucaturchet.it/].

We have been working on perceptual evaluation of synthesised footstep sounds. In the experiment we ran, participants put on shoes with sensors mounted in them. The sounds of different floor surfaces and shoe types were then synthesised and played back over noise-isolating headphones, and the participants were asked to shape the spectral content with the aid of some very basic audio filters.

The intended outcome is to identify the extent to which different participants will vary the spectral characteristics of their footsteps.
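
To give a flavour of what "very basic audio filters" might mean in practice – the code below is my own illustrative stand-in, not necessarily the filters used in the experiment – a single brightness control could split the signal at a crossover frequency and apply gain only to the high band:

```python
# An illustrative sketch only: a one-knob "brightness" control of the kind a
# participant might adjust. The 2 kHz crossover, parameter names and the
# noise-burst stand-in footstep are assumptions, not the experiment's setup.
import numpy as np
from scipy.signal import butter, sosfilt

def shape_brightness(x, sr, gain_db=0.0, crossover_hz=2000.0):
    """Boost or cut everything above crossover_hz by gain_db decibels."""
    sos = butter(2, crossover_hz, btype="low", fs=sr, output="sos")
    low = sosfilt(sos, x)          # low band
    high = x - low                 # everything above the crossover
    return low + (10.0 ** (gain_db / 20.0)) * high

sr = 44100
t = np.arange(sr) / sr
footstep = np.random.randn(sr) * np.exp(-8.0 * t)   # decaying noise burst
brighter = shape_brightness(footstep, sr, gain_db=6.0)
```

Logging the gain settings participants settle on for each surface and shoe type would then give a simple handle on how much the spectral characteristics vary across participants.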

Further updates on this research to follow.