My MSc project, on dereverberation applied to microphone bleed reduction, has been accepted for publication.
I implemented existing research in reverb removal and combined it with a method for microphone interference reduction. In any multiple-source environment there will be interference between opposing microphones, as pictured below.
Research at Queen Mary University allows this interference to be reduced in real time, and my project was to improve this by also removing natural acoustic reverberation in real time, to assist the microphone bleed reduction.
This work will be published at the AES conference on DREAMS (Dereverberation and Reverberation of Audio, Music, and Speech).
David Moffat and Joshua D. Reiss. “Dereverberation and its application to the blind source separation problem”. In Proc. Audio Engineering Society Conference: 60th International Conference: DREAMS (Dereverberation and Reverberation of Audio, Music, and Speech). Audio Engineering Society, 2016. to appear.
The features available within ten audio feature extraction toolboxes are presented, and a combined list of unique features is created. Each tool is then compared against this total list of unique features. Because the relative importance of audio features is heavily context-dependent, each toolbox is also evaluated on its coverage of the MPEG-7 and Cuidado standard feature sets, which provides a more meaningful measure of the relative importance of the features each toolbox offers. The results of this can be seen below.
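The coverage comparison described above amounts to simple set arithmetic. The sketch below illustrates the idea with made-up toolbox names and feature lists (the real study surveyed ten toolboxes against the actual MPEG-7 and Cuidado feature sets):

```python
# Hypothetical toolboxes and feature lists, for illustration only.
toolbox_features = {
    "ToolboxA": {"mfcc", "spectral_centroid", "zero_crossing_rate"},
    "ToolboxB": {"mfcc", "spectral_centroid", "spectral_flatness", "loudness"},
}
# Stand-in for an MPEG-7/Cuidado standard feature set.
standard = {"mfcc", "spectral_centroid", "spectral_flatness",
            "loudness", "harmonicity"}

# Union of every feature found across all toolboxes.
all_features = set().union(*toolbox_features.values())

for name, feats in toolbox_features.items():
    unique_cov = len(feats & all_features) / len(all_features)
    std_cov = len(feats & standard) / len(standard)
    print(f"{name}: {unique_cov:.0%} of unique features, "
          f"{std_cov:.0%} of the standard set")
```

Each toolbox gets two scores: coverage of the union of all observed features, and coverage of a standard feature set.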
The accuracy of these audio features is presented here: https://github.com/craffel/mir_eval
Further information and detailed analyses will be presented in my upcoming paper:
David Moffat, David Ronan and Joshua D. Reiss, “An Evaluation of Audio Feature Extraction Toolboxes,” In Proc. 18th International Conference on Digital Audio Effects (DAFx-15), November 2015, to appear.
I have recently been working on an evaluation of audio feature extraction toolboxes, and have had a paper accepted to DAFx on the subject. While there are a range of ways to analyse each feature extraction toolbox, computation time can be an effective evaluation metric, especially as the MIR community looks at larger and larger data sets. 16.5 hours of audio (8.79 GB) was analysed and the MFCCs computed using eight different feature extraction toolboxes. The computation time for every toolbox was captured, and can be seen in the graph below.
The MFCCs were used because they are a feature that exists within nine of the ten given toolboxes, and so should provide a good basis for comparing computational efficiency. The MFCCs were all calculated with a 512-sample window size and a 256-sample hop size. The input audio is at a variety of sample rates and bit depths, to confirm that each feature extraction tool can handle variable input file formats. The test was run on a MacBook Pro with a 2.9 GHz i7 processor and 8 GB of RAM.
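As a rough illustration of what each toolbox computes (this is a minimal numpy sketch, not any toolbox's actual implementation), here is an MFCC pipeline with the 512-sample window and 256-sample hop described above, wrapped in the kind of wall-clock timing used for the comparison:

```python
import time
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, centre, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, centre):
            fb[i - 1, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fb[i - 1, k] = (right - k) / max(right - centre, 1)
    return fb

def mfcc(signal, sr, n_fft=512, hop=256, n_filters=26, n_coeffs=13):
    # Frame the signal: 512-sample Hann window, 256-sample hop.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Power spectrum -> mel-band energies -> log -> DCT-II (first 13 kept).
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    log_mel = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    dct = np.cos(np.pi / n_filters
                 * (np.arange(n_filters) + 0.5)[None, :]
                 * np.arange(n_coeffs)[:, None])
    return log_mel @ dct.T

# Time the extraction on one second of a 440 Hz tone.
sr = 16000
y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
t0 = time.perf_counter()
coeffs = mfcc(y, sr)
print(f"{coeffs.shape[0]} frames of {coeffs.shape[1]} coefficients "
      f"in {time.perf_counter() - t0:.3f} s")
```

The benchmark itself used each toolbox's own MFCC implementation; only the window and hop settings above are taken from the actual test configuration.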
More information will be available in my upcoming paper “An Evaluation of Audio Feature Extraction Toolboxes” which will be published at DAFx-15 later this year.
The Hoxton Owl is a programmable guitar effects pedal developed around an ARM Cortex M4 chip. The pedal is fully programmable, to allow users to create any custom patch that they require.
Recently, I have been developing some basic patches for the Owl in C, drawing on my DSP knowledge. These can be found in the Owl patch library – http://hoxtonowl.com/patch-library/.
The Owl is a stable, reliable and fun piece of hardware that offers users a practically unlimited range of effects, which can be bespoke-designed for any specific application.
Yesterday, the AES presented a workshop on Intelligent Music Production. Josh Reiss started the day with a great discussion of the current state of the art in Intelligent Music Production, with strong indications of where future research will head. Hyunkook Lee presented some interesting work on 3D placement of sources in a mix, and on separating tracks based on the perceived inherent height of different frequency bands. Brecht De Man discussed his PhD work on subjective evaluation of music mixing: his path to understanding how people go about producing their preferred mix of music, and how that mix is perceived by others.
Following this, Sean Enderby provided an energetic talk on the SAFE tools produced at BCU for attaching semantic terms to presets for a range of audio effect plugins. Alessandro Palladini of Music Group UK presented their current work on “Smart Audio Effects for Live Audio Mixing”, which included interesting work on multiple side-chained and parameter-reduced effects, and new methods and tools for mix engineers, both in the studio and in live music scenarios. Their research focuses on providing an intuitive set of tools that remain perceptually relevant. Alex Wilson presented his work on how participants mix a song in a very simplified mix simulation, and how the starting position impacts the final mix that participants produce.
Videos of all the presentations are available here: http://www.semanticaudio.co.uk/media/
The 8th September 2015 sees the Audio Engineering Society UK Midlands Section presenting a workshop on Intelligent Music Production at Birmingham City University.
As ever, C4DM have a strong presence at this workshop, as two of the six presented talks are by current C4DM members. Ryan Stables, the event organiser, and others at the Digital Media Technology (DMT) Lab in Birmingham City University are currently collaborating with C4DM on the Semantic Audio Feature Extraction (SAFE) project. More information on this project can be found here
Josh Reiss will present a summary of the current state of the art in Intelligent Music Production, highlighting current research directions and the implications of this technology. Brecht De Man will present some of his PhD results in perceptual evaluation of music production, as he attempts to understand how mix engineers carry out their work. Further to this, Alex Wilson, previously a visiting student at C4DM for six months, will present his recently published work from the Sound and Music Computing Conference on navigating the mix space.
More information on the workshop, including abstracts and registration, can be found here http://www.aes-uk.org/forthcoming-meetings/aes-midlands-workshop-on-intelligent-music-production/.