In this episode of The Immersive Audio Podcast, host Oliver is joined by Darren Emerson, producer, director and co-founder of VR City and East City Films.
Recorded over Christmas 2018, during the production of the interactive documentary ‘Common Ground’, which recently had its World Premiere at the Tribeca Film Festival 2019, Darren discusses the challenges of devising engaging and innovative work and what it takes to push the medium forward in a highly competitive industry. He talks to us about his journey into VR, his earlier VR documentaries “Indefinite” and “Witness 360: 7/7”, and the challenges of running a business.
We also learn how funding for creatives has developed and how the availability of financial support is helping the industry expand, enabling people to use new technology to break rules, push boundaries and create new and exciting ways of storytelling.
For this episode of the Immersive Audio Podcast, Oliver Kadel is joined via Skype by Dave Malham, ambisonics researcher, retired Experimental Officer at the University of York and half of the team behind the FuMa format.
Dave Malham’s professional interests are in digital audio and related computing systems, post-stereo multidimensional sound projection systems such as ambisonics, electroacoustic music and recording engineering. He worked in the Department of Music from 1973 to 2012 and was Experimental Officer in the Music Research Centre with special responsibility for the Music Technology Group, which he helped found in 1986.
During the 1980s he was responsible for the hardware and low level software that enabled the Composers Desktop Project computer music system to be realised on Atari ST computers. He developed this into the Audio Design SoundMaestro digital audio editing system.
Since then he has been responsible for the design of the Focusrite Blue245 20-bit, the Audio Design PB4 18-bit and PB4+ 24-bit stereo audio ADCs, as well as the microcontrollers, sensors and RF link technologies for the RIMM project and the hardware for Craig Vear’s “Singing, Ringing Buoy” project. He has written a number of VST plugins for ambisonic processing, the “MRC Stereometer”, which implements Bob Katz’s K-system metering as a VST plugin, and, with Matt Paradis, the “ambilib” ambisonic processing library for PD as well as Max/MSP.
His research relates to digital audio, signal preservation, sound spatialisation and recording techniques. He has engineered 18 LPs and CDs and has edited several others. His research topics include advanced sound spatialisation technologies, the applications of spatialisation systems in musical composition and the development of sensing devices for musical performance applications. He has been an Audio Engineering Society member since 1975 and he has a patent, WO02085068, for the Ambisonic Sound Object Format.
Today, Oliver and Dave discuss the impact that Malham has had on the Immersive Audio industry we know today, audio production in the 70s and 80s, and the advancements of ambisonics into the digital era.
This episode was produced by Abbigayle Bircham, Gillian Duffy, Oliver Kadel, Felix Thompson and included music by Knobs Bergamo.
If you can, head to our page on iTunes and leave us a review and rating – it really helps us out in pushing our show further! The podcast is also available on Soundcloud and Stitcher.
Would you like to hear from a particular person, company or a certain topic area in the XR industry? Please get in touch at firstname.lastname@example.org telling us what you want to hear on the Immersive Audio Podcast.
Visit 1618digital.com to access the show notes and other episodes. Follow us @1618digital on Twitter and Instagram.
For this episode of the Immersive Audio Podcast, our team of Oliver Kadel and Felix Thompson bring you the highlights from the Immerse(d) event in London, with interviews from some of the main speakers and other guests from the day.
Having already held events in Los Angeles and Montreal, SUBPAC’s event series headed to Ravensbourne University in South East London last weekend.
These meetings have brought together artists, scientists, practitioners and technologists, to explore how deep immersive sound and music are changing society.
There were several panels throughout the day, which looked at the deep cultural roots of bass in British Afro-Caribbean culture, and concepts such as how sound can be used to improve health and mindfulness.
There were several installations too. Guests were able to try out immersive experiences such as “Internal Garden”. By putting on a SUBPAC and headphones, users were able to hear and feel signals from plants.
James Edward Marks
James E. Marks is an experimental new media provocateur. He is a creator of award-winning social video edutainment, and maker of immersive mixed and virtual reality experiences. With over 20 years of hands-on collaborations with alternative, pop culture branded and unbranded entertainment, James is Co-Founder & Chief Marketing Officer for DoubleMe, a Silicon Valley transformative tech start-up that’s pushing the boundaries of holographic mixed reality. He is also the founder of PsychFi Lab & Hackstock Festival. At PsychFi, he collaborates with the biggest social video & moving image artists, exploring immersive tech, psychology, psychedelia, sci-fi and pop culture.
Gary Pritchard is the Dean of the School of Media at Ravensbourne University. Ravensbourne is an innovative, industry-focused college based in Greenwich, that prepares students for careers in digital media and design.
Prior to his role at Ravensbourne University, he gained significant executive-level experience in both the state and private education sectors and in commercial environments.
As Director of Talent at Forward 3D – a global digital marketing agency – Gary helped develop their leadership and development programme.
Magic Beans is an immersive-audio startup founded by 3D audio pioneers Gareth Llewellyn and Jon Oliver. It maps high-definition spatial audio onto the world around you – creating new kinds of mixed reality audio experiences for brands, artists and visitor attractions.
Magic Beans’ work includes helping to create DotDotDot’s groundbreaking immersive theatre experience SOMNAI, while Gareth and Jon were the sound team behind The Philharmonia Orchestra’s acclaimed VR Sound Stage. Premiering at SXSW 2018 in April, it went on to receive Royal approval from Prince William.
Earlier in their careers, Gareth and Jon were pioneers in 3D audio for cinema, mixing the world’s first immersive feature ‘Red Tails’ – in partnership with Skywalker Sound and LucasFilm.
Justin Wiggan is an artist who has worked in a variety of mediums, including sound, phonics, film and performance. His exhibitions have been displayed around the world, and at the Immerse(d) event was demonstrating his latest project – “Internal Garden: Life of Plants”. By using SUBPAC technology and headphones, Justin offers users an immersive experience that lets them feel and hear plant signals.
Lena has 17 years of hearing therapy experience. Drawing on her own background of hearing difficulties, she aims to help other adults with hearing loss to better understand and manage their conditions. She completed her training at Bristol University and the Royal National Throat Nose and Ear Hospital in 2004 and is a member of the Registration Council for Clinical Physiologists. Over the years Lena has held several clinical posts. These have included setting up the first Hearing Therapy service in Hertfordshire and working as an Advanced Hearing Therapist for 6 years at the Royal National Throat Nose and Ear Hospital.
From his work as an international DJ to his current role as a sound architect, Tom Middleton has spent decades working in audio. Tom started his professional career as an electronic neo-classical composer and international touring DJ and artist – sharing stages with the likes of Lady Gaga and Kanye West. Today, he designs psychoacoustic soundtracks and soundscapes for major brands and organisations. His audio sensory company SONUX aims to help global firms reduce stress, improve sleep, boost productivity, increase resilience and enhance performance.
Ari Peralta is the CEO and Founder of Arigami, a company that uses cutting edge neuroscience research into multi-sensory perception to help brands leave a lasting impression on consumers. Ari began his career at Nielsen Media Research, and has since become a Forbes recognized innovator and serial entrepreneur. Through his company Arigami, Ari now focuses on building the connection between multisensory cues, emotions and memory.
For this episode of the Immersive Audio Podcast, Oliver Kadel is joined via Skype by Adam Levenson, the Vice President of Business Development at Gaudio Lab.
Based in Seoul, Gaudio Lab develops audio technology solutions for VR and streaming media including the Sol VR360 SDK and the Sol Loudness SDK available for licensing now. Working with the likes of Honda Innovation and Naver Corporation, Gaudio Lab received the “Innovative VR Company of the Year” award from the AMD Studios VR Awards in 2017.
Adam Levenson has 25 years of experience in audio production and technology. During his tenure as Senior Director at Activision, Adam established the Central Audio team supporting work on major franchises such as Call of Duty, Skylanders, James Bond, Spider-Man, and Transformers. He’s since worked in high-level positions at companies like Somatone Interactive, CRI Middleware, and Krotos.
On this episode, Oliver and Adam discuss the topics of loudness, audio quality in streaming and standards for creators.
This week on the Immersive Audio Podcast, Oliver is joined by Dr Alex Southern, a Principal Consultant and the Auralisation Lead for the engineering and construction consultancy company AECOM.
Alex is also a former Royal Society Industry Fellow and the most recent winner of the Institute of Acoustics Young Person Innovation Award in Acoustical Engineering for his work in auralisation, and he recently received a further award for his work on the A303 Stonehenge project.
Today, Oliver and Alex cover the role of audio in architecture and engineering, auralisation for civil-engineering projects, how academia and research drive the future of audio in engineering, and the challenges within the acoustics industry.
For this episode, Oliver is joined via Skype by Will Buchanan, Director of RPPTV. Starting out recording his own music for his band, Will picked up a part-time job in a recording studio whilst studying Astrophysics at Queen Mary University of London, juggling both personal and client work. He briefly worked as a music producer before moving into producing music videos and films as well, eventually landing the role of Director at RPPTV.
RPPTV develops simple-to-use media production tools, recently focusing on audio production to support creators. Working closely with experts and academics, they aim to create groundbreaking technology to take the next steps into the future of audio production. They’ve worked with the likes of Innovate UK, Salsa Sound and Mixed Immersion, as well as a variety of educational institutions such as the University of York, the University of Surrey and the University of Salford in Manchester.
Today, Oliver and Will discuss Immersive Audio in the music and film industries, the ASSIGN project and procedural audio, as well as engaging academic research in the creative processes to make more innovative products.
Today, Oliver was joined in the studio by Dr. Gavin Kearney, Senior Lecturer in Audio and Music Technology at the University of York. Gavin received an honours degree in electronic engineering from Dublin Institute of Technology in 2002, and M.Sc. and Ph.D. degrees in audio signal processing from Trinity College Dublin in 2006 and 2010 respectively. He subsequently worked as a Postdoctoral Research Fellow on game audio, while lecturing on the Interactive Digital Systems and Music and Media Technology masters courses at Trinity College Dublin.
He was appointed Lecturer in sound design at the Department of Theatre, Film, and Television at the University of York in January 2011 where he currently teaches both bachelors and masters level courses on spatial audio and surround sound, audio engineering and sound production and postproduction methods. Gavin also continues to work in the audio industry as a sound engineer and designer.
In this episode, Gavin focuses on ongoing research, industry practice standards and enhancing audio description.
Audio extracts are taken from the first-person drama Pearl, a film produced at the University of York with binaurally enhanced audio, and a music recording session from Abbey Road Studios featuring Nova Neon.
For this episode of the Immersive Audio Podcast, Oliver Kadel is joined in the studio by Christophe Mallet, the Commercial Director for the London-based company Somewhere Else. The company specialises in immersive tech, most notably working with virtual reality to help companies improve their relationships both with audiences through advertising, and with their employees through VR training experiences.
Starting out in strategic business consultancy, Christophe moved on further into digital and social media consultancy as well as working with an experimental music label. Through a friend, he was introduced to the world of VR through an exhibit allowing patrons to enter the scene of Van Gogh’s The Night Cafe. This was the defining moment that prompted him to start up Somewhere Else, which has since gone on to work for the likes of Samsung, Adidas and The Champions League.
Christophe and Oliver touch on a number of subjects surrounding VR, including the artist’s perspective and the immersive properties of art, the purpose of creating immersive content, immersive storytelling and the audience’s role, and the responsibilities of the creator.
In today’s episode, Oliver was joined via Skype by Dr. Hyunkook Lee, Senior Lecturer in Music Technology and Production and the leader of the Applied Psychoacoustics Lab (APL) at the University of Huddersfield. Hyunkook joined Huddersfield in 2010 and developed research in the area of 3D audio psychoacoustics as well as undergraduate modules such as Acoustics and Concert hall recording technique. In 2014 he established the APL, a research group studying the mechanism of human auditory perception and developing new audio algorithms for practical applications. He has undertaken a number of consultancy works for companies such as Samsung Electronics, Volvo Car and L-ISA.
Hyunkook is also an experienced recording and mixing engineer specialising in acoustic music.
Before joining Huddersfield, Dr Lee was a Senior Research Engineer at LG Electronics in South Korea, where he led a project to develop audio post-processing algorithms for LG mobile phones. He has also participated in MPEG audio codec standardisation activities, contributing to the developments of codecs such as SAOC and USAC. Hyunkook graduated from the music and sound recording (Tonmeister) course at the University of Surrey in 2002. During the course he spent a placement year as an assistant engineer at Metropolis studios in London. He gained his PhD from the same university in 2006.
His PhD research was concerned with the subjective effects and objective measurements of interchannel crosstalk in multichannel microphone techniques, and as a Senior Lecturer, he now spends his time tutoring and guiding aspiring students in the research of 3D sound and continues to further progress the academic understanding of the subject.
In this episode, Dr Hyunkook Lee talks to 1.618 Digital about a variety of topics within 3D sound and ambisonics: psychoacoustics, microphone and recording techniques, and theories such as phantom image and elevation perception. He also shares his personal research tips for audio engineering students, the importance of realising the value of your own research, and believing in the work you do for eventual real-world applications.
Autonomous Sensory Meridian Response, more commonly known as ASMR, is one of the most curious phenomena to grace the science of sound whilst maintaining a vast audience all across the globe. Through the power of the internet and word of mouth, more and more people are actively seeking out videos of people scratching microphones, tapping fingernails and softly whispering into extremely sensitive mics, giving listeners a sensory response like no other.
Sometimes described as brain tingles, brain massages and brain orgasms, listening to different triggers produces a small euphoric sensation in those who experience ASMR. The tingles and shivers typically begin at the scalp, with the effects travelling down the shoulders and back (and, in some cases, to the limbs), giving a sense of relaxation and peacefulness which some researchers believe may have positive effects on health and wellbeing. Not everyone responds to the same triggers, and some don’t have the response at all.
This is theorised to be linked to the perceptions of closeness and elements of care associated with certain sounds and sensations, which we as humans react to in the same way a child reacts to being held close to their mother, her hand running through their hair with comfort. It makes us feel safe and secure, and less troubled by the world around us because we’ve shut it out to focus our attention on these sensory triggers. So for someone looking for a sense of relationship and being cared for, ASMR offers a form of respite from the lack of those feelings, even if only in the short-term. One only has to search ASMR into Google or YouTube to find a plethora of channels and videos made by ASMRtists, freely accessible for the public to use to their heart’s content.
From what is considered the very first ASMR video, uploaded by WhisperingLife in 2009, to new content being created every week, videos have evolved to become more and more immersive with role-play and effects, yet they still hold true to their initial purpose of audible stimulation. The production of these videos can be complex – props, costumes, cameras and SFX feature in some examples – but in its simplest form, a video only requires soft, satisfying sounds and a binaural microphone to be effective. A binaural microphone records with one capsule for each ear, producing a two-channel signal that, played back over headphones, recreates the illusion of closeness and proximity as the sound source moves around you in 3D space.
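The interaural cues that make binaural recordings feel so intimate – tiny differences in arrival time and loudness between the two ears – can be imitated in software. The Python sketch below is a toy illustration only, not real HRTF-based binaural rendering; the function name, head radius and level-difference values are our own assumptions. It pans a mono signal using Woodworth’s interaural time difference approximation plus a crude level difference:

```python
import numpy as np

def simple_binaural_pan(mono, sample_rate, azimuth_deg,
                        head_radius=0.0875, speed_of_sound=343.0):
    """Crudely place a mono signal left/right using interaural time and
    level differences (ITD/ILD). azimuth_deg: -90 (left) .. +90 (right)."""
    az = np.radians(azimuth_deg)
    # Woodworth's ITD approximation for a distant source
    itd = (head_radius / speed_of_sound) * (abs(az) + np.sin(abs(az)))
    delay_samples = int(round(itd * sample_rate))
    # Toy ILD: attenuate the far ear by up to ~6 dB (not measured HRTF data)
    near_gain = 1.0
    far_gain = 10 ** (-6 * abs(np.sin(az)) / 20)
    delayed = np.concatenate([np.zeros(delay_samples), mono])  # far ear: later
    padded = np.concatenate([mono, np.zeros(delay_samples)])   # near ear
    if azimuth_deg >= 0:  # source on the right: left ear is the far ear
        left, right = far_gain * delayed, near_gain * padded
    else:
        left, right = near_gain * padded, far_gain * delayed
    return np.stack([left, right], axis=1)  # (samples, 2) stereo array

# Place a 1 kHz tone 60 degrees to the right of the listener
sr = 44100
t = np.arange(sr) / sr
stereo = simple_binaural_pan(np.sin(2 * np.pi * 1000 * t), sr, 60)
```

Listening on headphones, the small delay and level drop in the left channel shift the tone to the right; real binaural microphones capture these cues (and the ear’s frequency shaping) acoustically rather than computing them.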
Scientifically speaking, there is still very little research material on the subject – the term ASMR was only coined in 2010 by Jennifer Allen, with much of the most prolific research conducted by ASMR University, run by Dr Craig Richard. But since its rise in popularity online, more and more material is being produced in aid of the scientific exploration of ASMR. Worldwide surveys, academic papers and books are just some examples of media exploring new angles, from biological and social influences to the deconstruction and study of each individual element that comes together to create a trigger.
To find out more about ASMR with interviews from a variety of experts and creators, listen to our Immersive Audio Podcast episode about ASMR on iTunes and Soundcloud!