Multimedia: Backdraft inspired sound sweetening

The movie Backdraft employs sound sweetening techniques, layering animal noises over the top of the roaring fire to instil a subconscious, primitive fear in the viewer. With roaring lions and shrieking monkeys, Backdraft uses a multitude of subtle and overt sounds to create excitement, and they are balanced in a way that doesn’t pull focus from the theme but actually intensifies it!


Using animalistic sounds in this context works because of our already learned, subconscious connections. We already know that a loud lion roar is scary. It sounds scary, and that’s because we’ve already learned it. So when we hear this sound out of context, we don’t immediately connect it to the conscious memory of a lion but to the subconscious connection with the emotion of fear, because this cognitive path is shorter and is therefore received more quickly. Think RAM.


To implement this I gathered a few animal sounds that I felt could work throughout the film and that correlated to certain themes. I then sprinkled these around and found that they worked well in a few places. When I found their resting places, I dragged them around a little until it felt right, and then started processing them. To make them stand up against the surrounding sounds, I parallel compressed them so they would sit in the mix, then bussed them to a fairly sparse reverb. I then juxtaposed this against a shiny wavetable synth in Serum. The sounds that I used are a lot less aggressive, and this resulted in them sounding better in a more serene, less dark atmosphere.
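
As a rough illustration of the parallel compression step, here is a minimal Python/NumPy sketch – not my actual plugin chain, and the file name, threshold and ratio are placeholder assumptions. The idea is that a heavily squashed copy of the sound is blended back underneath the untouched signal, so the transients keep their shape while the quieter tail is lifted enough to hold its own against the fire bed.

```python
# Minimal parallel compression sketch (illustrative values, not my real settings).
import numpy as np
import soundfile as sf

dry, sr = sf.read("lion_roar.wav")        # hypothetical animal-sound file
if dry.ndim > 1:
    dry = dry.mean(axis=1)                # fold to mono for simplicity

# Simple envelope follower: rectify, then smooth with attack/release coefficients.
attack = np.exp(-1.0 / (0.005 * sr))      # ~5 ms attack
release = np.exp(-1.0 / (0.100 * sr))     # ~100 ms release
env = np.zeros_like(dry)
level = 0.0
for i, x in enumerate(np.abs(dry)):
    coeff = attack if x > level else release
    level = coeff * level + (1.0 - coeff) * x
    env[i] = level

# Heavy downward compression: ~8:1 above a -30 dBFS threshold.
thresh_db, ratio = -30.0, 8.0
env_db = 20.0 * np.log10(np.maximum(env, 1e-9))
over = np.maximum(env_db - thresh_db, 0.0)
gain = 10.0 ** ((-over * (1.0 - 1.0 / ratio)) / 20.0)
wet = dry * gain

# Parallel blend: dry on top, compressed copy tucked in underneath.
mix = dry + 0.5 * wet
sf.write("lion_roar_parallel.wav", mix / np.max(np.abs(mix)), sr)
```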


Multimedia: ADR – Automated Dialogue Replacement

ADR is the process of re-recording dialogue over the visually recorded scene. The process has become much simpler in recent years thanks to advancements in technology, but it has been used in film for decades. In the years before digital recording technology it was referred to as looping, because the scene would be physically spliced from the reel and looped over and over for re-recording. Nowadays the process is much faster, and voice actors have a practically unlimited number of opportunities to capture the perfect audio take.

ADR is usually implemented when there is a fault in the original take’s audio. These flaws can be anything from unwanted ambient noise (such as a plane flying overhead during the recording of a period piece) to unwanted noise from practical effects, such as a blowing fan, where the visual effect is wanted but the noise is not. Sometimes ADR is simply the result of a creative change, such as the director deciding to change the voice of a certain character because a different voice suits the themes of the film more.

A good example of ADR is the speeder chase in Star Wars: Episode II. In the recording of this scene, a large fan was used to emulate movement in the characters’ hair as they travel through the city in the stationary speeder, but this of course creates problems, because the dialogue recorded on set is awash with the noise of a massive fan. By re-recording the audio, the creators were able to pair clean, coherent speech with immersive visual effects to create a credible, believable experience.

Multimedia: Mixing Film – Considering Variables of Application and Consumption

There are many differences between mixing for film and mixing a song. These differences mainly arise from the desired application of the sound. For example, when we mix a song we usually want to create a false but acceptable stereo image – the idea of a space. This space can be anything we envisage that atmospherically corresponds with the recorded sounds, and we achieve it by recording with ambient microphone techniques and applying effects such as EQ, compression and reverb in post-production. Our approach must differ slightly when mixing for film, because the audience has a visual reference for the space and can therefore make subconscious judgements as to whether or not the audio sounds right within it. The engineer’s decision-making process should start with finding the ideal dynamic signature for each sound and progress to the ideal application of post-production effects, both for the reality of the space and for the desired effect of the sound.

To provide an overzealous example: if we had an instrument doused in a cathedral convolution reverb in a song, then as long as it works musically, dynamically and sonically alongside our other audio elements, this is completely fine. However, if we had a section of dialogue doused in the same reverb while the visual reference for the space was a small concrete room, we would perceive this as unnatural – whether consciously or not, we would know it isn’t normal. This is all down to reference, the main tool of our sensory perception. I’m not saying that this would never be used, but it would feel unnatural and would therefore not be used to create a realistic nuance. Mixing to visuals effectively creates another reference for glue – another medium to consider – and our decisions should be made accordingly. This is because of our brain’s natural tendency to reference patterns and correlations (our brain constantly compares what we see and hear so it knows when something is amiss). This type of effect could, however, be used to connote an unnatural atmosphere.

A great example of this type of processing is Star Wars: The Last Jedi, in the telepathic scenes between Rey and Kylo Ren, the juxtaposed character archetypes of the film. Within the storyline these characters have a telepathic bond, a fact which isn’t overtly stated until a fair way into the film. However, the audience is given this information by the audio. The two voices are placed within the same ominous-sounding space, telling our brain immediately that these two voices are communicating and exist within the same non-physical plane. In my opinion, this is a genius use of audio to foreshadow and develop a storyline.

Another contributing factor to the differences in mixing for film is the way in which it is consumed. Dynamic range and mixing decisions in film depend heavily on the audience. For example, a large blockbuster film intended for the cinema will be mixed so that the transient sounds are jarring and hit with huge impact at around 100 dB; the dialogue will be compressed and mixed so that it is loud enough to carry over the ambient sound of an audience (popcorn rattling, mumbled conversation and such), sitting at around 64 dB relative to the audience; and the diegetic ambience of the film will sit just below that. Music and score tend to sit somewhere between the loudest transient sounds and the dialogue, but should be placed subjectively depending on the desired effect. This is not necessarily so the mix is perceived as natural, but because it provides the most immersive listening platform for the audience.

Once the audience has been considered, an engineer must also consider the listening environment. Using a cinema as the pinnacle and most dramatic of examples, one must consider the X-curve of the space. The SMPTE standard specifies that in the average cinema the response rolls off in the high end, progressively attenuating the listener’s perception of frequencies from around 2 kHz up to 20 kHz, because of the size of the space as well as its treatment. Cinemas are designed to provide the most consistent listening platform possible. For example, the chairs are soft and plush not only for consumer comfort but because this makes the room sound more uniform regardless of its current capacity.
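
As a small aside, my reading of that roll-off can be sketched in code: flat to 2 kHz, roughly 3 dB per octave down to 10 kHz and steeper above that. Treat the exact slopes and corner frequencies here as assumptions from memory rather than a quotation of the SMPTE document.

```python
# Approximate X-curve target response (assumed shape; check SMPTE ST 202 for the real spec).
import numpy as np

def x_curve_db(freq_hz):
    """Return the assumed target level in dB (0 dB = flat) for each frequency."""
    f = np.asarray(freq_hz, dtype=float)
    level = np.zeros_like(f)
    mid = (f > 2000) & (f <= 10000)
    hi = f > 10000
    level[mid] = -3.0 * np.log2(f[mid] / 2000.0)               # -3 dB per octave
    level_at_10k = -3.0 * np.log2(10000.0 / 2000.0)            # about -7 dB
    level[hi] = level_at_10k - 6.0 * np.log2(f[hi] / 10000.0)  # -6 dB per octave above 10 kHz
    return level

freqs = [1000, 2000, 4000, 8000, 16000]
for f, db in zip(freqs, x_curve_db(freqs)):
    print(f"{f:>6} Hz -> {db:6.1f} dB")
```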

To achieve this it is important to know the system you are listening on and to have solid points of reference for your audio level. An industry-standard method is to run pink noise through the mixing system and calibrate all speakers to read the same level from a single listening position (85 dB is recommended). This is done with an external SPL meter.
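
The pink-noise half of that workflow is easy enough to sketch; the actual calibration still has to be done against the external meter, trimming each speaker until the meter at the mix position reads the target. A minimal generator, assuming the soundfile package and a -20 dBFS RMS digital reference level:

```python
# Generate a pink-noise calibration signal (a sketch; the SPL reading itself
# must come from an external meter at the listening position).
import numpy as np
import soundfile as sf

sr, seconds = 48000, 30

# Shape white noise to a 1/f power spectrum (1/sqrt(f) amplitude scaling).
white = np.random.randn(sr * seconds)
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(white.size, d=1.0 / sr)
scale = np.ones_like(freqs)
scale[1:] = 1.0 / np.sqrt(freqs[1:])      # leave the DC bin alone
pink = np.fft.irfft(spectrum * scale, n=white.size)

# Normalise to roughly -20 dBFS RMS (a common digital reference; an assumption here).
pink *= 10 ** (-20 / 20) / np.sqrt(np.mean(pink ** 2))
sf.write("pink_calibration.wav", pink, sr)
```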

In conclusion, to create an effective mix for film one must consider not only the space we are creating within the film and the effect we wish it to have, but also the physical space in which our theoretical space is to be perceived – and then strike a critical, informed balance between all of these variables.

Multimedia: Audio Standards in Film – The Rising Tide

The optimum output volume for a given piece of media essentially comes down to the dynamic range of its content. For example, it would be stupid to whack the faders up on a dialogue-based piece with no extra-diegetic sound, because we would just be listening to REALLY loud talking. No, optimising a track for a certain volume refers to the difference between the dynamic range of these sounds and the best average volume at which to listen to the entire thing. For example, have you ever been sat watching a film where the main protagonists are having a really tense, quiet conversation, and then suddenly a mind-bending explosion startles the neighbours two doors down, and you are left frantically padding around looking for the remote before the next explosion strips you of your ability to hear? That is because that film is mixed to be LOUD, as a lot of modern films are. This is simply because volume equals impact.


This is definitely an area of ambiguity and a matter for contention in the modern industry. With the constant improvement of playback technology, alongside the increasing availability of home cinema systems, movie sound can get louder than ever before without the risk of losing sonic integrity. The substantial rise of home hi-fi systems and the “bigger is better” mentality of modern storytellers place us in a unique situation where loudness can be embedded into a film with much less contemplation of whether the audience will ever truly be able to receive it. In years past the standard volume for audio playback in a cinema setting sat at around 85 dB. It is hard to place a single figure on this because, as with any art, the beast defines the terms – each case will have its own variables to consider, and each mix should differ depending on what the film is trying to achieve. However, in more recent times mixes are being optimised for louder and louder playback, sometimes even reaching an eardrum-blasting 108–110 dB. This inevitably comes down to the pressure applied by external sources. An experienced musician or engineer will know the detrimental, self-defeating effects of making something louder, but the director will of course want to create the most affecting piece of work possible, and as we all know, LOUD seems to be BETTER.


I suppose, now, with technology reaching its pinnacle, it could be said that it is no longer a case of what sounds best, but of how a film can have the most impact without being unsafe. But is this really the avenue to explore if true quality is what is desired? Probably not, right? Loud mixes can create a whole multitude of problems. For example, think of the last action movie you saw: how much dynamic compensation were you having to apply with your TV remote between the dialogue sections and the all-out action? Probably a fair amount, because these films cannot be listened to quietly – the level you would set to hear the speech is way too high for the action, and vice versa. This is the sound engineer’s attempt to impact you, overdriven by the director’s pressure for volume. That having been said, sometimes there is absolutely nothing wrong with a really loud mix. I mean, you wouldn’t want to watch an all-out action movie and not have it rattle your sofa – but how far is too far?

I suppose it depends on the beast.


Boyes states: “We need to have some kind of even approach to dealing with clients to let them know that we agree that movies have gotten too loud. I think a real education process has to happen, and, if it does, it will be a real defining moment for film audio as we start the new century.”

Multimedia: Diegetic and Non-Diegetic Sounds

A lot of sound design focuses on hyper-realism and creating larger-than-life sounds; however, juxtaposing this sound sweetening against more realistic, relatable sounds can actually emphasise the scale of the sweetened sound. This method is used very commonly in modern TV. For example, episode 5 of DC’s newly released Black Lightning introduces the antagonist Tobias Whale walking tall through the hallways of a crooked community centre, in semi slow motion, to the aggressive thud of a lo-fi hip hop track. This scene immediately cuts to one of Whale’s victims screaming. The juxtaposition between the head-bopping swagger of a slightly intimidating rap record and the isolated screams of his victim introduces the nature of this character to great effect. Whilst abrupt, the cut between these sounds still allows the non-diegetic track to resolve before switching. This type of editing allows the experience of a media format to be immersive as well as jarring and challenging, because the contrast between these two feelings, and the lack of any journey between them, exposes the audience and creates tension. The method is used liberally across all visual media formats and is effective for a number of reasons.


Non-diegetic sound is sound not experienced by the film’s characters, and is usually used to describe the emotions of a character or of the story itself; in some cases the non-diegetic soundtrack can actually summarise the emotion or genre of the film to great effect. A good example of a curated soundtrack that is very effective in describing genre and style is Drive. It consists mainly of synthwave music with dark overtones, from artists such as Kavinsky and College, which perfectly encapsulates the themes of film noir.

WBL: Mixing outside of the stereo image

This is just a little trick I’ve found for placing instruments outside of the stereo image. It is useful for ambient sounds that don’t require much active attention but help to fill out the sonic landscape of the track being mixed, or for any time you want to throw a certain element really wide in the mix.

To achieve this effect we duplicate the recorded track, pan the two copies hard against one another, and then flip the polarity (phase) of one of the signals. The resulting effect is a sound that appears much wider than before. This happens because the flipped signal cancels against its partner wherever the two channels sum, so there is no centre image left at all, forcing us to perceive the sound as sitting at, or beyond, the edges of the stereo field.
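
A bare-bones sketch of the trick in Python/NumPy, assuming a mono source (the file names here are just placeholders): copy the signal, hard-pan the copies against each other and invert the polarity of one side.

```python
# Out-of-phase widening trick: duplicate, pan hard left/right, flip one copy.
import numpy as np
import soundfile as sf

mono, sr = sf.read("ambience.wav")        # hypothetical mono ambience recording
if mono.ndim > 1:
    mono = mono.mean(axis=1)

left = mono                                # original copy, panned hard left
right = -mono                              # duplicate, panned hard right, polarity flipped
wide = np.column_stack([left, right])
sf.write("ambience_wide.wav", wide, sr)
```

One caveat worth remembering: because the two channels are exact opposites, the sound disappears completely if the mix is folded to mono, so the trick is best kept for ambience that can afford to vanish on a mono system rather than anything structural.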

WBL: Summative Report 1

So, my experiences throughout this project have been varied. Some have been very successful and insightful, whilst others have seemed almost futile. One of the processes that has proved somewhat useful is mixing according to the advice of seasoned engineers. Mixing directly to the instructions of a video has proven very stressful for me, because I can hear the changes I would like to make but have had to restrain myself in order to fully appreciate the engineer’s approach. However, in doing this I have discovered other ways of using equipment. By closely watching the actions of people like Pensado, I have been able to examine analytically and learn exactly what he is doing and how he applies certain technology. Processes like this have also been useful because they have inspired me to research the mentioned characteristics and behaviours of devices more actively. One of the restrictions I have faced in this process is plugin translation: not having the same range of plugins and outboard that these engineers do, I have been unable to explore the equipment they use. This has, however, pushed me to explore my preferred plugins more aggressively and forced me to find the best alternatives within the plugins that I do have, giving me a more expansive knowledge of my plugin collection.

The most useful and enjoyable process so far has been shadowing. This was the most engaging for me because I was able to immerse myself in the process. Also, seeing and using studios that are far nicer than my own has inspired me and given me an idea of the things I can aim for within the field. This process also allowed me to explore newness and work outside of my comfort zone. I found the atmosphere of professional studio work to carry a fair amount of pressure. The main difference between my own sessions and these lay in the fact that people were expecting efficiency and for things to be successful first time. In a no-pressure environment it is much easier to sit and fiddle with something until it is right, which often results in things working quickly. But when time pressure is applied and you are being paid, it is expected that you achieve success instantly, which I have found often has the polar opposite effect on me.


To improve in this area, I need to apply pressure to my mixing situations to become used to it. Hopefully, this is a case of repetition equals improvement.


WBL: The importance of reference tracks

A reference track, in the context of mixing, is a track of similar production value that you listen to in order to hear what is going on within it. As a labour of love and passion, approaching music with a methodological mindset can sometimes be off-putting and seem a little boring. However, as Pensado states, “mixing without referencing is like running a race without knowing who you’re competing against”. In this statement Pensado is outlining the importance of knowing what is going on around you. The fact is that the perfect mix is very subjective and different audiences want a different product, so to know what we are supposed to be aiming for, we use reference tracks. This concept was a little deterring for me for a while: I wanted to be creative and have the freedom to do the things I enjoy. However, having implemented the use of reference tracks, I now understand that it in fact has the opposite effect. It gives you a goal and opens horizons by taking ego and assumption out of the act of mixing, and it provides a platform for experimentation.

Throughout the remainder of this year, I will be using reference tracks wherever necessary to improve the quality of my mixes. That having been said, I have already come across a few flaws in the concept. Having found it good practice to ask the artists I’m working with to suggest a few reference tracks, bands, genres etc., I hit the inevitable wall of: they want to sound like that, but they sound nothing like that. I have found the only way to conquer this problem is by politely talking with the artist, explaining and providing insight where they are not able to see it, and as kindly as possible urging them either to adjust the music or to provide a different reference. It can also sometimes be hard to find a reference track suitable for the track I am mixing.

WBL: Treating my drum room.

Drums as an instrument create a huge amount of energy, and the key problem when recording them is capturing that energy in a pleasing manner. Microphone choice is very important in this process. However, it is widely recognised that it is hard, if not impossible, to get a nice drum sound out of an unpleasant-sounding room. To increase the quality of my drum recordings I should ensure that my drum room sounds as pleasing as possible.

The key problem with my room is its size. Drum sounds are highly transient, and a lot of sound energy is dispersed around the room; if these waves reflect back too quickly (a result of the short reflection paths in a small room), the results can be unpleasant to the ear and harsh in the top end. To tackle this problem, I should apply diffusers to the walls in problem areas. Unfortunately, due to the size of the room, it will be very hard to capture a really nice room sound.
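
To put a rough number on “reflecting back too quickly”, here is a back-of-the-envelope calculation rather than a measurement: the delay between repeats of a flutter echo bouncing between two parallel walls is just the round-trip distance divided by the speed of sound, so the closer the walls, the faster (and harsher-sounding) the repeats.

```python
# Rough flutter-echo timing between two parallel walls (back-of-the-envelope).
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def flutter_echo(wall_spacing_m):
    """Return (delay between repeats in ms, repeats per second)."""
    delay_s = 2.0 * wall_spacing_m / SPEED_OF_SOUND
    return delay_s * 1000.0, 1.0 / delay_s

for spacing in (2.5, 4.0, 8.0):            # small drum room vs. larger rooms
    ms, per_sec = flutter_echo(spacing)
    print(f"{spacing:4.1f} m apart: {ms:5.1f} ms between repeats ({per_sec:4.1f} per second)")
```

At 2.5 m the repeats arrive roughly every 15 ms, far too fast to be heard as separate echoes, which is exactly the kind of smeared harshness that diffusion on those parallel surfaces helps to break up.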

To work on my drum room I will periodically test it with a flat microphone, paying special attention to the frequency ranges that apply to drums. A kick drum’s bottom end dwells between 45 and 60 Hz, its roundness between 100 and 150 Hz; the blur in a kick drum resides between 500 and 800 Hz, and its presence between 5 and 8 kHz. The body of a snare sits at around 500 Hz, its punch at around 5–8 kHz, and the hiss anywhere between 8 and 12 kHz. The presence of cymbals exists in the region of 800 Hz to 6 kHz, their clarity between 6 and 8 kHz, and their brightness between 8 and 12 kHz.
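
To keep those ranges honest while testing, a small script can read a measurement (or a recording of the kit in the room) and report the energy in each band. This is a sketch using SciPy band-pass filters; the file name is hypothetical and the band edges are lifted, with a little rounding, from the list above.

```python
# Report RMS energy per drum-relevant band from a test recording (a sketch).
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

BANDS = {                                   # Hz, adapted from the ranges above
    "kick bottom":       (45, 60),
    "kick roundness":    (100, 150),
    "kick blur":         (500, 800),
    "kick presence":     (5000, 8000),
    "snare body":        (400, 600),
    "snare punch":       (5000, 8000),
    "snare hiss":        (8000, 12000),
    "cymbal presence":   (800, 6000),
    "cymbal clarity":    (6000, 8000),
    "cymbal brightness": (8000, 12000),
}

audio, sr = sf.read("room_test_take.wav")   # hypothetical measurement recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)

for name, (lo, hi) in BANDS.items():
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    band = sosfilt(sos, audio)
    rms_db = 20 * np.log10(np.sqrt(np.mean(band ** 2)) + 1e-12)
    print(f"{name:>17}: {rms_db:6.1f} dBFS")
```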

With this in mind we are able to flatten the room using room-testing software such as Room EQ Wizard and the application of different acoustic treatments. Throughout this process I have been re-measuring the room after each change to track its effect.

Here is a graph depicting the effect of the changes to my room so far.

[Graph: measured room frequency response before and after treatment]


For more information on the types of acoustic treatment available try:

https://www.soundonsound.com/sound-advice/beginners-guide-acoustic-treatment


Multimedia: Not writing to grid

This was a learning curve for me: having to write sections of music that don’t correspond to a single time signature’s grid – for example, half-bar segments here and quarter-bar segments there, all mapped to different, appropriate tempos and time signatures. This was a challenge. To get around this perplexing bit of music theory I decided to take a different approach. Instead of planning out a score and crowbarring it into the piece, I picked a few instruments and just began writing to the film, adjusting the tempo of each section and writing it in. This approach allowed me to represent the emotions of the video more successfully and to hit sync points better.
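
The arithmetic behind “adjusting the tempo of the section” is simple enough to write down: given the time in seconds between where a cue starts and the sync point it needs to land on, and how many beats of music I want in that gap, the tempo falls straight out. (A hypothetical helper, not part of any DAW’s own tooling.)

```python
# Tempo needed for a musical cue to land exactly on a picture sync point.
def tempo_for_sync(seconds_to_sync, beats_wanted):
    """Return the BPM at which `beats_wanted` beats span exactly `seconds_to_sync`."""
    return beats_wanted * 60.0 / seconds_to_sync

# e.g. a hit point 7.5 s after the cue starts, with two 4/4 bars (8 beats) of build-up:
print(tempo_for_sync(7.5, 8))   # 64.0 BPM
```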


This does, however, come with its challenges. After writing in all of the emotions that I saw in the video with chord progressions and short melodies, I began to realise that the cohesion of the sound itself was a bit haphazard, so to improve this I should add motifs throughout to create recurring themes. The method also left me with some not-very-musical results. To counteract this I should implement some other techniques, such as writing melodies across different octaves to create lift instead of raising the whole chord progression; this gives me a well-founded bass chord that symbolises home and wellness, and a tense rising melody that represents distance from the tonic chord through the juxtaposition of the two pitches. To create intense movement I could also change the tonic chords underneath the rising melody, though this would require more work to resolve back to our home chord.
