Category Archives: Intelligent Mixing

DAFx Day 3

Day three of the Digital Audio Effects Conference (DAFx15) began with an excellent introduction to and summary of wave digital filters and digital waveguides by Kurt Werner and Julius O. Smith from CCRMA, in which the current state of the art in physical modelling of nonlinearities was presented and some potential avenues for future exploration were discussed. Following on from this, further work was discussed.

DAFx Day 2

Photo by Jørn Adde © Trondheim kommune, from http://www.ntnu.edu/web/dafx15/dafx15

Day two of the DAFx conference at NTNU in Trondheim opened with Marije Baalman's keynote on the range of hardware and software audio effects and synthesisers available to artists, and how different artists utilise these effects. The talk focused primarily on the small embedded systems that artists use, such as the Arduino, BeagleBone Black and Raspberry Pi. Later in the day saw some more excellent work presented.

DAFx Conference 2015

The DAFx conference began with a tutorial day, where Peter Svensson provided a fantastic summary of the state of the art in sound field propagation modelling and virtual acoustics.

Slide from DAFx 15 Day 1

During lunch, as it was getting dark, the snow started, which unfortunately blocked our view of the Northern Lights that afternoon. Øyvind Brandtsegg and Trond Engum then discussed cross-adaptive digital audio effects and their creative use in live performance. They referenced existing work at Queen Mary as some of the state of the art, and then presented NTNU's current work on cross-adaptive audio effects. The workshop day was rounded off with Xavier Serra discussing the Audio Commons project and the use of open audio content.


The 139th Convention of the Audio Engineering Society in New York City

The weekend saw the 139th Convention of the Audio Engineering Society at the Javits Convention Center in New York City. The annual American AES Convention is the world’s main event for all things audio, spanning a wide range of topics including loudspeaker design, music production, hearing aids, game audio and perception, and featuring a huge trade show, in contrast to its less industry-heavy annual European counterpart.

A handful of C4DM delegates (Joshua D. Reiss, György Fazekas, Thomas Wilmering, David Moffat, David Ronan, and Brecht De Man) were each involved in multiple sessions.

Papers

T. Wilmering, G. Fazekas, A. Allik and M. B. Sandler, “Audio Effects Data on the Semantic Web” [Download paper]

D. Ronan, B. De Man, H. Gunes and J. D. Reiss, “The Impact of Subgrouping Practices on the Perception of Multitrack Music Mixes” [Download paper]

Dave Ronan also presented at the Student Design Exhibition with a physical model of a sitar based on a dynamic delay line and the Karplus-Strong model.
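For readers unfamiliar with the technique, the Karplus-Strong model that the sitar design builds on can be sketched in a few lines. This is a minimal, generic illustration of the classic algorithm (a noise-filled delay line with a low-pass feedback loop), not the design entered in the exhibition; all names and parameter values here are illustrative.

```python
import random

def karplus_strong(frequency, sample_rate=44100, duration=1.0, decay=0.996):
    """Minimal Karplus-Strong plucked-string synthesis.

    A delay line one period long is filled with noise; each new
    sample fed back into the line is a decayed average of the two
    oldest samples, which acts as a low-pass filter and gives the
    characteristic decaying pluck.
    """
    period = int(sample_rate / frequency)  # delay-line length in samples
    delay = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for _ in range(int(sample_rate * duration)):
        new_sample = decay * 0.5 * (delay[0] + delay[1])
        out.append(delay.pop(0))  # emit the oldest sample
        delay.append(new_sample)  # feed the filtered sample back in
    return out

tone = karplus_strong(440.0)  # one second of an A4 pluck
```

A "dynamic" delay line, as in the sitar model, would vary the delay length over time to bend the pitch; the fixed-length version above is the textbook starting point.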

Workshops and tutorials

Workshop W20: “Perceptual Evaluation of High Resolution Audio” (Joshua D. Reiss (chair), Bob Katz, George Massenburg and Bob Schulein)

Tutorial T21: “Advances in Semantic Audio and Intelligent Music Production” (Ryan Stables (chair), Joshua D. Reiss, Brecht De Man and Thomas Wilmering)

Workshop W26: “Application of Semantic Audio Analysis to the Music Production Workflow” (György Fazekas (co-chair), Ryan Stables (co-chair), Jay LeBoeuf and Bryan Pardo)

Other events

Brecht De Man and Dave Moffat were responsible for the organisation of the entire Student and Career Development track as the Chair and Vice Chair of the Student Delegate Assembly (Europe and International Regions). These events included a student party (this edition at NYU’s James L. Dolan’s Music Recording Studio), the Student Recording Competition, the Student Design Competition, and a very successful edition of the Education and Career Fair.

Dave Ronan represented Queen Mary at the latter, discussing the various taught and research courses with an emphasis on the new MSc in Sound and Music Computing and handing out a lot of QM swag.

Committees

High Resolution Audio Technical Committee: Josh

Semantic Audio Analysis Technical Committee: György and Thomas

Education Committee: Dave Moffat and Brecht

Josh also serves as a member of the Board of Governors of the AES.


Upcoming AES events with a C4DM presence

AES UK Analogue Compression – Theory and Practice at British Grove Studios, London, UK (12 November 2015) Members only
Organised by Brecht and 2014-2015 MSc student Charlie Slee

AES UK Audio Signal Processing with E-Textiles at Anglia Ruskin University, Cambridge, UK (26 November 2015)
By Becky Stewart (PhD graduate and visiting lecturer)

60th Conference on Dereverberation and Reverberation of Audio, Music, and Speech (DREAMS) in Leuven, Belgium (3-5 February 2016)
Several C4DM papers including
David Moffat and Joshua D. Reiss. “Dereverberation and its application to the blind source separation problem”. In Proc. Audio Engineering Society Conference: 60th International Conference: DREAMS (Dereverberation and Reverberation of Audio, Music, and Speech). Audio Engineering Society, February 2016.

61st Conference on Audio for Games in London, UK (10-12 February 2016)
Brecht and Dave on committee, C4DM papers submitted

140th Convention of the Audio Engineering Society in Paris, France (4-7 June 2016)
If you are attending as a student (undergraduate, master, PhD), please get in touch with Brecht or Dave, and consider submitting a project to the Student Design Competition or Student Recording Competition to receive feedback from industry experts and prizes.


For any questions about the Audio Engineering Society regarding e.g. membership, publications, and local events, please contact Brecht (Chair of the Student Delegate Assembly, Chair of the London UK Student Section, and Committee Member of the British Section) or Dave (Vice Chair of the Student Delegate Assembly).

Dereverberation

My MSc project on dereverberation applied to microphone bleed reduction has been accepted for publication.

I implemented existing research in reverb removal and combined it with a method for microphone interference reduction. In any multiple-source environment there will be interference between microphones, as pictured below.

Research at Queen Mary University of London allows this interference to be reduced in real time, and my project aimed to improve on this by also removing natural acoustic reverberation in real time, to assist with the microphone bleed reduction.

This work will be published at the AES conference on DREAMS (Dereverberation and Reverberation of Audio Music and Speech).

David Moffat and Joshua D. Reiss. “Dereverberation and its application to the blind source separation problem”. In Proc. Audio Engineering Society Conference: 60th International Conference: DREAMS (Dereverberation and Reverberation of Audio, Music, and Speech). Audio Engineering Society, February 2016. To appear.

AES on Intelligent Music Production

Yesterday, the AES presented a workshop on Intelligent Music Production. The day started with a great discussion by Josh Reiss of the current state of the art in Intelligent Music Production, with strong indications of where future research will head. Hyunkook Lee presented some interesting work on 3D placement of sources in a mix, and on how to separate tracks based on the perceived inherent height of different frequency bands. Brecht De Man discussed his PhD work on subjective evaluation of music mixing, his path to understanding how people go about producing their preferred mix of music, and how this mix is perceived by others.

Following this, Sean Enderby gave an energetic talk on the SAFE tools produced at BCU for attaching semantic terms to presets for a range of audio effect plugins. Alessandro Palladini from Music Group UK presented their current work on “Smart Audio Effects for Live Audio Mixing”, which included interesting work on multiple side-chained and parameter-reduced effects, and new methods and tools for mix engineers, both in the studio and in live music scenarios. Their research focuses on providing an intuitive set of tools that remain perceptually relevant. Alex Wilson presented his work on how participants mix a song in a very simplified mix simulation, and how the starting positions impact the final mix that participants produce.

Videos of all the presentations are available here: http://www.semanticaudio.co.uk/media/

AES Workshop on Intelligent Music Production

The 8th September 2015 sees the Audio Engineering Society UK Midlands Section presenting a workshop on Intelligent Music Production at Birmingham City University.

As ever, C4DM have a strong presence at this workshop, as two of the six presented talks are by current C4DM members. Ryan Stables, the event organiser, and others at the Digital Media Technology (DMT) Lab at Birmingham City University are currently collaborating with C4DM on the Semantic Audio Feature Extraction (SAFE) project. More information on this project can be found here

Josh Reiss will present a summary of the current state of the art in Intelligent Music Production, highlighting current research directions and the implications of this technology. Brecht De Man will present some of his PhD results in perceptual evaluation of music production, as he attempts to understand how mix engineers carry out their work. Further to this, Alex Wilson, previously a C4DM visiting student for six months, will present his recently published work from the Sound and Music Computing Conference on navigating the mix space.

More information on the workshop, including abstracts and registration, can be found here http://www.aes-uk.org/forthcoming-meetings/aes-midlands-workshop-on-intelligent-music-production/.

Listening In The Wild

Today, 28th August 2015, C4DM presented a one-day workshop entitled Listening In The Wild, organised by Dan Stowell, Bob Sturm and Emmanouil Benetos.

The morning session presented a range of research, including sound event detection using NMF and DTW techniques, understanding detectability variations of species and habitats, and animal vocalisation synthesis through probabilistic models.
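As a quick illustration of the DTW side of the sound event detection work mentioned above, here is a minimal, generic dynamic time warping distance between two 1-D sequences. This is a textbook sketch for orientation only, not the presenters' code; the function name and test sequences are illustrative.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    Fills the classic cumulative cost matrix, where each cell holds
    the cheapest alignment cost of the two prefixes, allowing match,
    insertion and deletion steps.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])        # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# identical content at different speeds aligns at zero cost
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # → 0.0
```

This elasticity to time-stretching is what makes DTW useful for comparing vocalisations that vary in tempo between individuals; in practice it would be run on feature vectors (e.g. spectral frames) rather than raw scalars.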

The post-lunch session saw discussion of vocal modelling and analysis, working towards understanding how animals produce their associated sounds. This was followed by further discussion of NMF, and then by work on using bird songs as part of a musical composition.

The poster session included work on auditory scene analysis, bird population vocalisation variations, CHiME (a sound source recognition dataset), technology-assisted animal population size measures, bird identification through the use of identity vectors, and DTW for bird song dissimilarity.

Further information on the presenters and posters is available here