The 61st International Conference of the Audio Engineering Society, on Audio for Games, took place in London from 10 to 12 February 2016. This was the fifth edition of the Audio for Games conference, which features a mixture of invited talks and academic paper sessions. Traditionally a biennial event, it was held again in 2016 by popular demand, following a very successful fourth edition in 2015.
Christian Heinrichs presented work from his doctoral research with Andrew McPherson, discussing Digital Foley and introducing FoleyDesigner, which allows for effectively using human gestures to control sound effects models.
I presented a paper on weapon sound synthesis in the Synthesis and Sound Design paper session, and my colleague William Wilkinson presented work on mammalian growls; both papers can be found in the conference proceedings.
Furthermore, Xavier Serra and Frederic Font presented the Audio Commons project and how the creative industries could benefit from and get access to content with liberal licenses.
Along with presenting work at this conference, I was also involved as the technical coordinator and webmaster for the Audio for Games community.
More information about the conference can be found on the conference website.
During the DAFx conference dinner, awards for the best papers were announced. Honourable Mentions:
- An Evaluation of Audio Feature Extraction Toolboxes by David Moffat, David Ronan and Joshua D. Reiss
- Improving the robustness of the iterative solver in state-space modelling of guitar distortion circuitry by Ben Holmes and Maarten van Walstijn
- Digitizing the Ibanez Weeping Demon Wah Pedal by Chet Gnegy and Kurt Werner
- Two polarisation finite difference model of bowed strings with nonlinear contact and friction forces by Charlotte Desvages and Stefan Bilbao
- A Model for Adaptive Reduced-Dimensionality Equalisation by Spyridon Stasis, Ryan Stables and Jason Hockman
- Harmonic Mixing Based on Roughness and Pitch Commonality by Roman Gebhardt, Matthew Davies and Bernhard Seeber
As posted on the DAFx website – http://www.ntnu.edu/dafx15/
Day three of the Digital Audio Effects Conference (DAFx15) began with an excellent introduction to and summary of wave digital filters and digital waveguides by Kurt Werner and Julius O. Smith from CCRMA, in which the current state of the art in physical modelling of nonlinearities was presented and some potential avenues for future exploration were discussed. Following on from this, work was presented on:
- identification of metrical structure of music, by Elio from C4DM
- research from the University of York on whether spatial audio is noticeably preferred in computer games
- Discussion and evaluation of feature extraction toolboxes, when to use different feature extraction tools, and how we can develop them in the future, by Dave from C4DM
- Work on vocal tract modelling from York, PPCU Budapest and KTH Sweden.
Day two of the DAFx conference at NTNU in Trondheim opened with Marije Baalman's keynote on the range of hardware and software audio effects and synthesisers available to artists, and how different artists utilise these effects. The talk focused primarily on the small embedded systems that artists use, such as the Arduino, BeagleBone Black and Raspberry Pi. Later in the day, some excellent work was presented, including:
- work on granular synthesis, presented by Sadjad Siddiq from Square Enix
- a collaboration on synthesising percussive drilling sounds, between IRCAM and HUT
- work from CCRMA on using a modal reverberator structure to modify samples
- work on intelligent multitrack audio subgrouping by Dave Ronan and Dave Moffat from the Centre for Digital Music, Queen Mary University of London
The DAFx conference began with a tutorial day, where Peter Svensson provided a fantastic summary of the State of the Art in sound field propagation modelling and virtual acoustics.
During lunch, as it was getting dark, the snow started, which unfortunately blocked our view of the Northern Lights that afternoon. Øyvind Brandtsegg and Trond Engum then discussed cross-adaptive digital audio effects and their creative use in live performance. They referenced existing work at Queen Mary as some of the state of the art, and then presented NTNU's current work on cross-adaptive audio effects. The workshop day was rounded off with Xavier Serra discussing the Audio Commons project and the use of open audio content.
The weekend saw the 139th Convention of the Audio Engineering Society at the Javits Convention Center in New York City. The annual American AES Convention is the world's main event for all things audio, spanning a wide range of topics including loudspeaker design, music production, hearing aids, game audio and perception, and featuring a huge trade show, in contrast to its less industry-heavy European counterpart.
A handful of C4DM delegates (Joshua D. Reiss, György Fazekas, Thomas Wilmering, David Moffat, David Ronan, and Brecht De Man) were each involved in multiple sessions.
D. Ronan, B. De Man, H. Gunes and J. D. Reiss, “The Impact of Subgrouping Practices on the Perception of Multitrack Music Mixes” [Download paper]
Dave Ronan also presented at the Student Design Exhibition with a physical model of a sitar based on a dynamic delay line and the Karplus-Strong model.
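The Karplus-Strong model at the heart of that sitar design is simple enough to sketch. Below is a minimal Python illustration of the general idea (a noise-filled delay line with a damped averaging filter), not Dave's actual exhibited implementation; all parameter values are illustrative.

```python
import numpy as np

def karplus_strong(freq=220.0, fs=44100, dur=1.0, damping=0.996):
    """Minimal Karplus-Strong plucked string synthesis."""
    n = int(fs / freq)  # delay-line length sets the pitch
    # initial excitation: a burst of white noise ("pluck")
    delay = np.random.default_rng(0).uniform(-1.0, 1.0, n)
    out = np.empty(int(fs * dur))
    for i in range(out.size):
        out[i] = delay[i % n]
        # two-point average low-passes the loop, giving the string its decay
        delay[i % n] = damping * 0.5 * (delay[i % n] + delay[(i + 1) % n])
    return out

tone = karplus_strong()
```

A sitar model such as the one exhibited would extend this loop with a dynamic (time-varying) delay line; that extension is not shown here.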
Workshops and tutorials
Workshop W20: “Perceptual Evaluation of High Resolution Audio” (Joshua D. Reiss (chair), Bob Katz, George Massenburg and Bob Schulein)
Tutorial T21: “Advances in Semantic Audio and Intelligent Music Production” (Ryan Stables (chair), Joshua D. Reiss, Brecht De Man and Thomas Wilmering)
Workshop W26: “Application of Semantic Audio Analysis to the Music Production Workflow” (György Fazekas (co-chair), Ryan Stables (co-chair), Jay LeBoeuf and Bryan Pardo)
Brecht De Man and Dave Moffat were responsible for organising the entire Student and Career Development track, as Chair and Vice Chair of the Student Delegate Assembly (Europe and International Regions). These events included a student party (this edition at NYU's James L. Dolan Music Recording Studio), the Student Recording Competition, the Student Design Competition, and a very successful edition of the Education and Career Fair.
Dave Ronan represented Queen Mary at the latter, discussing the various taught and research courses with an emphasis on the new MSc in Sound and Music Computing and handing out a lot of QM swag.
High Resolution Audio Technical Committee: Josh
Semantic Audio Analysis Technical Committee: György and Thomas
Education Committee: Dave Moffat and Brecht
Josh also serves as a member of the Board of Governors of the AES.
Upcoming AES events with a C4DM presence
AES UK Analogue Compression – Theory and Practice at British Grove Studios, London, UK (12 November 2015) Members only
Organised by Brecht and 2014-2015 MSc student Charlie Slee
AES UK Audio Signal Processing with E-Textiles at Anglia Ruskin University, Cambridge, UK (26 November 2015)
By Becky Stewart (PhD graduate and visiting lecturer)
60th Conference on Dereverberation and Reverberation of Audio, Music, and Speech (DREAMS) in Leuven, Belgium (3-5 February 2016)
Several C4DM papers including
David Moffat and Joshua D. Reiss. “Dereverberation and its application to the blind source separation problem”. In Proc. Audio Engineering Society Conference: 60th International Conference: DREAMS (Dereverberation and Reverberation of Audio, Music, and Speech). Audio Engineering Society, February 2016.
61st Conference on Audio for Games in London, UK (10-12 February 2016)
Brecht and Dave on committee, C4DM papers submitted
140th Convention of the Audio Engineering Society in Paris, France (4-7 June 2016)
If you are attending as a student (undergraduate, master, PhD), please get in touch with Brecht or Dave, and consider submitting a project to the Student Design Competition or Student Recording Competition to receive feedback from industry experts and prizes.
For any questions about the Audio Engineering Society regarding e.g. membership, publications, and local events, please contact Brecht (Chair of the Student Delegate Assembly, Chair of the London UK Student Section, and Committee Member of the British Section) or Dave (Vice Chair of the Student Delegate Assembly).
Teaching has started again and I am once again a teaching assistant for Sound Recording and Production Techniques. On 8 October, I lectured the class on microphone types, covering for the usual lecturer while he was away at a conference.
My score on last year's student feedback forms was 95%, so I hope to maintain a high standard of student support and teaching throughout the course this year.
This semester I am teaching on Sound Recording and Production Techniques, an MSc-level course on studio work and audio. Next semester I will be developing labs and teaching on the undergraduate course Introduction to Audio.
He discussed crowdfunding sources and how to budget for small start-up projects. The importance of open source, in terms of both software and hardware, was discussed at length; it is a vital aspect of what the OWL team set out to do.
The OWL is a custom-built programmable guitar effects pedal that allows anyone to write their own effects and load them onto the standalone hardware. Effects can be written in C++, Faust or even Pure Data (Pd). There is also a wrapper that allows users to run their patches as a VST or AU plugin within a digital audio workstation, and in the future it will also be possible to run patches in the browser. Recently, a modular synthesiser version of the OWL has also been released.
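To give a feel for the shape of such a patch: an effect is essentially a process callback that transforms a block of input samples into a block of output samples. The sketch below shows that structure in Python purely for readability; real OWL patches are written in C++, Faust or Pd, and the class and method names here are hypothetical, not the OWL API.

```python
import numpy as np

class GainPatch:
    """Toy effect patch: process() maps one input block of samples to
    one output block, which is the core structure of any patch."""

    def __init__(self, gain_db=-6.0):
        # convert a decibel setting to a linear gain factor
        self.gain = 10.0 ** (gain_db / 20.0)

    def process(self, block):
        # per-block processing: here, simple attenuation
        return self.gain * block

patch = GainPatch(gain_db=-6.0)
out = patch.process(np.ones(64))  # one block of 64 samples
```

A real pedal firmware would call the process callback repeatedly on the live audio stream, with the patch holding any state (delay lines, filter memories) between blocks.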
My MSc project, on dereverberation applied to microphone bleed reduction, has been accepted for publication.
I implemented existing research on reverb removal and combined it with a method for microphone interference reduction. In any environment with multiple sources, there will be interference between microphones, as pictured below.
Research at Queen Mary University of London allows this interference to be reduced in real time; my project improved on this by also removing natural acoustic reverberation in real time, to assist with the microphone bleed reduction.
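As a rough illustration of the kind of processing involved (not the published algorithm), interference suppression can be thought of as applying a time-frequency gain that attenuates bins where the estimated bleed dominates the wanted source. The function and parameter names below are purely illustrative.

```python
import numpy as np

def suppress_bleed(target_mag, bleed_mag, floor=0.1):
    """Spectral-subtraction-style gain applied to magnitude spectra:
    bins dominated by estimated bleed are pushed down towards `floor`."""
    gain = (target_mag - bleed_mag) / np.maximum(target_mag, 1e-12)
    return np.clip(gain, floor, 1.0) * target_mag

# bin 0: no bleed (kept intact); bin 1: strong bleed (attenuated to the floor)
out = suppress_bleed(np.array([1.0, 1.0]), np.array([0.0, 0.9]))
```

Removing reverberation first helps here because reverberant tails smear energy across time-frequency bins, making the bleed estimate less reliable.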
This work will be published at the AES conference on DREAMS (Dereverberation and Reverberation of Audio Music and Speech).
David Moffat and Joshua D. Reiss. “Dereverberation and its application to the blind source separation problem”. In Proc. Audio Engineering Society Conference: 60th International Conference: DREAMS (Dereverberation and Reverberation of Audio, Music, and Speech). Audio Engineering Society, February 2016 (to appear).