My thesis has been submitted. It went in back in June, but it has taken me some time to get back to posting. The title is “Perceptual Evaluation of Synthesised Sound Effects”, and a summary is available below.
I have spent the past week working at the Albany in Deptford, where we produced a 360-degree surround sound experience for Warsnare, a Deptford-based DJ and producer: https://soundcloud.com/warsnare.
Twenty-seven Genelec speakers and four subwoofers were spread across five different stages, with live performance spatialised and mixed in with pre-recorded spatial elements to produce a fully immersive experience. The show sold out, so additional tickets to watch from the balcony were also released.
It has been quite a while since I have posted, but I hope to resolve that shortly, with a number of academic papers being published this summer.
In the meantime, there is some discussion over the use of sound effects in post production, and the fundamental fact that many things you hear as part of a soundscape are not the original recorded sound. This is one of the fundamental justifications for my PhD, and it is very well explained in this TED Talk:
Last weekend saw the 140th Convention of the Audio Engineering Society, Europe’s largest gathering of audio professionals from around the globe, take place at the Palais des Congrès in Paris. From cutting-edge research to fundamentals to practical applications, the four-day technical programme brought the opportunity to network with and learn from leading audio industry luminaries. Special events, including technical tours of premier production facilities and installations, student-focused sessions and a three-day manufacturer exposition, rounded out the Convention. There was a particular focus on 3D and immersive audio at this Convention.
I was responsible for running all aspects of the student track of the convention, including Education and Career Fair, Student Design Competition, Recording Competitions and the Education Committee Meetings. At the end of the Convention I was promoted to Chair of the Student Delegate Assembly for Europe and International Regions.
The weekend saw the 139th Convention of the Audio Engineering Society take place at the Javits Convention Center in New York City. The annual American AES Convention is the world’s main event for all things audio, spanning a wide range of topics including loudspeaker design, music production, hearing aids, game audio and perception, and featuring a huge trade show, in contrast to its less industry-heavy annual European counterpart.
A handful of C4DM delegates (Joshua D. Reiss, György Fazekas, Thomas Wilmering, David Moffat, David Ronan, and Brecht De Man) were each involved in multiple sessions.
D. Ronan, B. De Man, H. Gunes and J. D. Reiss, “The Impact of Subgrouping Practices on the Perception of Multitrack Music Mixes” [Download paper]
Dave Ronan also presented at the Student Design Exhibition with a physical model of a sitar based on a dynamic delay line and the Karplus-Strong model.
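For readers unfamiliar with the Karplus-Strong model: it synthesises a plucked string by filling a delay line with noise and recirculating it through an averaging filter, whose frequency-dependent loss gives the characteristic decaying pluck. A minimal sketch of the basic algorithm (not Dave’s sitar extension, which adds a dynamic delay line) might look like this:

```cpp
#include <vector>
#include <cstdlib>

// Basic Karplus-Strong plucked-string synthesis: a delay line initialised
// with a burst of white noise, fed back through a two-point average.
// The delay length sets the pitch (roughly sampleRate / delayLength Hz).
std::vector<float> karplusStrong(int delayLength, int numSamples, unsigned seed = 1) {
    std::srand(seed);
    std::vector<float> delay(delayLength);
    for (float &s : delay)
        s = 2.0f * std::rand() / RAND_MAX - 1.0f;  // noise burst in [-1, 1]

    std::vector<float> out(numSamples);
    int pos = 0;
    for (int n = 0; n < numSamples; ++n) {
        out[n] = delay[pos];
        int next = (pos + 1) % delayLength;
        // The averaging filter models the string's frequency-dependent loss:
        // high frequencies decay quickly, giving the plucked timbre.
        delay[pos] = 0.5f * (delay[pos] + delay[next]);
        pos = next;
    }
    return out;
}
```

Because every recirculated sample is an average of earlier ones, the output stays bounded and the high-frequency content dies away over time.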
Workshops and tutorials
Workshop W20: “Perceptual Evaluation of High Resolution Audio” (Joshua D. Reiss (chair), Bob Katz, George Massenburg and Bob Schulein)
Tutorial T21: “Advances in Semantic Audio and Intelligent Music Production” (Ryan Stables (chair), Joshua D. Reiss, Brecht De Man and Thomas Wilmering)
Workshop W26: “Application of Semantic Audio Analysis to the Music Production Workflow” (György Fazekas (co-chair), Ryan Stables (co-chair), Jay LeBoeuf and Bryan Pardo)
Brecht De Man and Dave Moffat were responsible for the organisation of the entire Student and Career Development track as the Chair and Vice Chair of the Student Delegate Assembly (Europe and International Regions). These events include a student party (this edition at NYU’s James L. Dolan’s Music Recording Studio), Student Recording Competition, Student Design Competition, and a very successful edition of the Education and Career Fair.
Dave Ronan represented Queen Mary at the latter, discussing the various taught and research courses with an emphasis on the new MSc in Sound and Music Computing and handing out a lot of QM swag.
High Resolution Audio Technical Committee: Josh
Semantic Audio Analysis Technical Committee: György and Thomas
Education Committee: Dave Moffat and Brecht
Josh also serves as a member of the Board of Governors of the AES.
Upcoming AES events with a C4DM presence
AES UK Analogue Compression – Theory and Practice at British Grove Studios, London, UK (12 November 2015) Members only
Organised by Brecht and 2014-2015 MSc student Charlie Slee
AES UK Audio Signal Processing with E-Textiles at Anglia Ruskin University, Cambridge, UK (26 November 2015)
By Becky Stewart (PhD graduate and visiting lecturer)
60th Conference on Dereverberation and Reverberation of Audio, Music, and Speech (DREAMS) in Leuven, Belgium (3-5 February 2016)
Several C4DM papers, including:
David Moffat and Joshua D. Reiss. “Dereverberation and its application to the blind source separation problem”. In Proc. Audio Engineering Society Conference: 60th International Conference: DREAMS (Dereverberation and Reverberation of Audio, Music, and Speech). Audio Engineering Society, February 2016.
61st Conference on Audio for Games in London, UK (10-12 February 2016)
Brecht and Dave on committee, C4DM papers submitted
140th Convention of the Audio Engineering Society in Paris, France (4-7 June 2016)
If you are attending as a student (undergraduate, master, PhD), please get in touch with Brecht or Dave, and consider submitting a project to the Student Design Competition or Student Recording Competition to receive feedback from industry experts and prizes.
For any questions about the Audio Engineering Society regarding e.g. membership, publications, and local events, please contact Brecht (Chair of the Student Delegate Assembly, Chair of the London UK Student Section, and Committee Member of the British Section) or Dave (Vice Chair of the Student Delegate Assembly).
Teaching has started again and I am once again a teaching assistant for Sound Recording and Production Techniques. On 8 October I lectured the class on microphone types, covering for the usual lecturer while he was away at a conference.
My teaching score on last year’s student feedback forms was 95%, so I hope to maintain a high standard of student support and teaching throughout the course this year.
Teaching for this semester is for Sound Recording and Production Techniques, an MSc-level course on studio work and audio. Next semester I will be developing labs and teaching on the undergraduate course Introduction to Audio.
He discussed crowdfunding sources and how to budget for small start-up projects. The importance of open source, in terms of both software and hardware, was discussed at length; it is a vital aspect of what the OWL team set out to do.
The OWL is a custom-built programmable guitar effects pedal that allows anyone to write their own effects and load them onto the standalone pedal. Effects can be written in C++, Faust or even Pure Data (PD). There is also a wrapper that allows users to run their patches as a VST or AU within a Digital Audio Workstation, and in the future it will also be possible to run patches in the browser. Recently a modular synthesiser version of the OWL has also been released.
The Hoxton OWL is a programmable guitar effects pedal built around an ARM Cortex M4 chip. The pedal is fully programmable, allowing users to create any custom patch they require.
Recently, I have been developing some basic patches for the OWL, which can be found in the OWL patch library – http://hoxtonowl.com/patch-library/. I have been writing these basic patches in C, putting my DSP knowledge to use.
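To give a flavour of the kind of per-sample processing a patch contains, here is a minimal sketch of a feedback delay effect in plain C++. This illustrates the DSP only, not the OWL patch API itself (which wraps processing in a patch class and maps parameters to the pedal’s knobs); the function name and the fixed dry/wet mix are my own choices for the example.

```cpp
#include <vector>
#include <cstddef>

// Minimal feedback delay: the kind of inner loop an OWL patch's audio
// callback performs. delaySamples (> 0) and feedback would normally be
// driven by the pedal's knobs rather than fixed arguments.
void processDelay(std::vector<float> &audio, std::size_t delaySamples, float feedback) {
    std::vector<float> delayLine(delaySamples, 0.0f);
    std::size_t pos = 0;
    for (float &x : audio) {
        float delayed = delayLine[pos];
        delayLine[pos] = x + feedback * delayed;  // write input plus feedback
        x = 0.5f * (x + delayed);                 // equal dry/wet mix
        pos = (pos + 1) % delaySamples;
    }
}
```

Feeding an impulse through this produces a train of echoes spaced delaySamples apart, each scaled by the feedback amount.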
The OWL is a stable, reliable and fun piece of hardware that lets users create a virtually unlimited range of effects, bespoke-designed for any specific application.
Yesterday, the AES presented a workshop on Intelligent Music Production. The day started with a great discussion of the current state of the art in Intelligent Music Production, with strong indications as to where future research is headed, provided by Josh Reiss. Hyunkook Lee presented some interesting work on 3D placement of sources in a mix, and on separating tracks based on the perceived inherent height of different frequency bands. Brecht De Man discussed his PhD work on subjective evaluation of music mixing: his path to understanding how people go about producing their preferred mix of a piece of music, and how that mix is perceived by others.
Following this, Sean Enderby gave an energetic talk on the SAFE tools produced at BCU for attaching semantic terms to presets for a range of audio effect plugins. Alessandro Palladini from Music Group UK presented their current work on “Smart Audio Effects for Live Audio Mixing”, which included interesting work on multiple side-chained and parameter-reduced effects, providing mix engineers with new methods and tools both in the studio and in live music scenarios. Their research is focused on providing an intuitive set of tools that remain perceptually relevant. Alex Wilson presented his work on how participants mix a song in a very simplified mix simulation, and how the starting position impacts the final mix they produce.
Videos of all the presentations are available here: http://www.semanticaudio.co.uk/media/