Simple System for Realistic Impacts

Alright, I admit it – I dropped the ball.  I’ve been way wrapped up in DOing VR sound design and totally neglecting the task of blogging about it.  In honor of that, I thought I’d write about creating sounds tied to in-game physics… like a bouncing ball.

What “touch controllers” mean for sound design.

With VR systems like the Oculus and Vive, there are new controllers (one for each hand) that allow players to interact with objects in a much more realistic way than previously possible – you can pick things up, drop them, throw them, etc.  Of course, all of these objects will probably need sound, and it’s impossible to predict where or how they’ll collide with things (will the player drop one straight down? throw it across the room? roll it down some stairs?).  The most realistic way to design sound for these objects is to tie the sounds to the actual physics of the game.

If the thought of tackling that complexity is getting you down, I’m here to offer you a cure for your depression – a little thing I call “The SSRI.”

Simple System for Realistic Impacts (SSRI)

Categories:

The SSRI breaks the sounds for an object down into 4 categories:  Hard, Medium, Soft, and Stop.  The first three correspond to hard, medium, and soft impacts for the object, with Hard being the loudest and each subsequent category quieter.  The “Stop” category is the final sound an object makes as it comes to rest (often a few quick, subtle impacts and/or possibly a sliding sound, depending on the object).

Variation:

Since an object is likely to get knocked around more than once, we want to make sure there’s enough variety that it doesn’t sound the same every time.  For that, we do 2 things:

  1. Make 6-10 similar (but distinct) sounds for each category, and set them up to be randomly selected when called.
  2. Set up a little pitch/volume modulation per instance (±2-5% usually does the trick) to add greater variance.

Even with a minimum of 6 sounds per category, that’s 24 raw sounds, and coupled with the modulation it should be enough variety that any repeated sounds are unlikely to be noticed.
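
Here’s a minimal sketch of that variation step in Python.  The play() callback, the filenames, and the dictionary layout are all hypothetical stand-ins for whatever your engine or middleware actually provides:

```python
import random

# Hypothetical sound lists – in practice, 6-10 variants per category.
SOUNDS = {
    "hard":   ["ball_hard_01.wav", "ball_hard_02.wav", "ball_hard_03.wav"],
    "medium": ["ball_med_01.wav", "ball_med_02.wav", "ball_med_03.wav"],
    "soft":   ["ball_soft_01.wav", "ball_soft_02.wav", "ball_soft_03.wav"],
    "stop":   ["ball_stop_01.wav", "ball_stop_02.wav", "ball_stop_03.wav"],
}

def play_category(category, play):
    """Pick a random variant and nudge its pitch/volume a few percent."""
    clip = random.choice(SOUNDS[category])
    pitch = 1.0 + random.uniform(-0.05, 0.05)    # up to +/- 5%
    volume = 1.0 + random.uniform(-0.05, 0.05)
    play(clip, pitch=pitch, volume=volume)
```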

Physics:

Now we have to do a bit of programming to tell the machine how to use these sounds.  Whenever we get a hit event for the object colliding with something, we need to play one of them.  Which one depends on how hard the object hits, which we can determine from its velocity at impact.  Once we know that, our logic will look something like this:

[Flowchart: the SSRI’s velocity-threshold decision logic, described below]

Upon impact, we check to see if the velocity is greater than our lower threshold for playing the “Hard” sound.  If it isn’t, we check whether it’s greater than our threshold for playing “Medium,” and so forth.  When a check comes back true, the corresponding sound fires.
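
In code, that cascade might look something like the sketch below, reusing the play_category() helper from the variation sketch above.  The threshold values here are made up – as noted below, you’ll need to tune them per object:

```python
# Made-up threshold values – tune these per object (see the notes below).
HARD_THRESHOLD = 6.0
MEDIUM_THRESHOLD = 3.0
SOFT_THRESHOLD = 1.0
STOP_THRESHOLD = 0.2    # kept a bit greater than 0 – see the notes below

def on_impact(speed, play):
    """Fire the appropriate category based on how hard the object hit."""
    if speed >= HARD_THRESHOLD:
        play_category("hard", play)
    elif speed >= MEDIUM_THRESHOLD:
        play_category("medium", play)
    elif speed >= SOFT_THRESHOLD:
        play_category("soft", play)
    elif speed >= STOP_THRESHOLD:
        play_category("stop", play)
    # Below the Stop threshold, nothing plays at all.
```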

 
A few notes:

– Getting the right threshold values may take some trial and error, depending on settings (e.g. the assigned mass of the object), but so far I’ve found that once you have them, they tend to work pretty well for similar objects.

– It’s important to make sure the threshold for “Stop” is a bit greater than 0.  If it’s too low, a settling object can generate a flurry of tiny collisions (and, from that, a bunch of noise from sounds firing on every one of them).

– It may be worthwhile to put a time delay on how quickly these can re-fire (I find something in the range of 0.1 to 0.2 seconds seems to work pretty well; see the sketch after these notes).  This makes sure that, if multiple collisions happen in very quick succession, you don’t get a weird, glitchy, machine-gun rapid-fire of your sounds.

– When I originally built this, I had the thought to modulate the volume based on velocity as well.  Once I had it built this far, though, it worked so well that doing so didn’t seem necessary, and I opted to keep it simple.  Your mileage may vary.
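
That re-fire delay can be a simple cooldown wrapped around the impact handler.  Here’s a sketch, with time.monotonic() standing in for whatever clock your engine exposes:

```python
import time

COOLDOWN = 0.15    # seconds – somewhere in the 0.1-0.2 range

_last_fired = 0.0

def on_impact_debounced(speed, play):
    """Ignore impacts that arrive too soon after the last sound fired."""
    global _last_fired
    now = time.monotonic()
    if now - _last_fired < COOLDOWN:
        return    # too soon – skip this collision
    _last_fired = now
    on_impact(speed, play)
```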

Wrapping up

So that’s it.  Pretty simple, right?  I hope The SSRI helps some of you out there in your sound design quests.  I’ll be talking more about touch controllers and the unique sound design situations that come about as a result of them in the next post.

Rethinking Sound Design for Virtual Reality Games

Everything you know is wrong.  OK, not everything, but if you’ve done sound design for anything else, you’re going to have to think differently to design sound for Virtual Reality.  We’re not designing sound for a show people passively sit and watch, or for a game that’s up on a screen in a room.  We’re more like the sound designers for The Matrix – a virtual world that people can move around in and interact with.

What’s different?

Unlike surround sound or even simple stereo, which typically assume a fixed vantage point for the listener, in VR the vantage point changes constantly as the observer moves around in the virtual environment (or even simply turns their head).  Instead of panning a sound to a fixed point in the stereo or surround image, we place sounds into the 3D space of the virtual world.  Each sound is then processed through an HRTF (Head-Related Transfer Function), which creates the illusion of the sound emanating from a point in space relative to the listener.  Just as the visual input changes with the observer’s movement in VR, HRTF processing allows sound to change realistically with the observer’s ever-changing vantage point.
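
To make the idea concrete, here’s a toy sketch of the kind of per-ear cues a spatializer derives from a sound’s position relative to the listener.  Real HRTF processing is far more sophisticated (frequency-dependent filtering for each ear, typically measured from real heads); this only computes an azimuth, a crude interaural time difference, and constant-power left/right gains:

```python
import math

HEAD_RADIUS = 0.0875      # meters – roughly an average human head
SPEED_OF_SOUND = 343.0    # m/s

def spatial_cues(listener_pos, listener_facing, source_pos):
    """Toy spatialization: positions are (x, z) pairs, facing is in radians."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    distance = math.hypot(dx, dz)
    # Angle of the source relative to where the listener is facing
    azimuth = math.atan2(dx, dz) - listener_facing
    azimuth = math.atan2(math.sin(azimuth), math.cos(azimuth))    # wrap to [-pi, pi]
    # Crude interaural time difference (Woodworth approximation)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(azimuth) + azimuth)
    # Constant-power pan: full left at -90 degrees, full right at +90
    pan = max(-1.0, min(1.0, math.sin(azimuth)))
    left_gain = math.cos((pan + 1.0) * math.pi / 4.0)
    right_gain = math.sin((pan + 1.0) * math.pi / 4.0)
    return distance, itd, left_gain, right_gain
```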

(I’ll talk more about HRTF in a future post.)

We are stereo. Sound is mono.

When someone speaks to you, when you set your phone down on a table, when your alarm clock goes off… all of those sounds essentially come from a single point in space.  They are monophonic.  The idea of stereo sound was to create the illusion of various points of origin for a sound, panned along a line between two speakers.  Surround sound came along later and extended that line into a circle around the listener.

In VR, though, HRTF processing is going to handle making sounds seem like they’re coming from wherever they’re placed in the virtual world (not just around us, like in stereo and surround sound, but above and below, too!).  Since HRTF is doing the work of “spatialization” for us, as we design sounds for VR we no longer need to think about panning.  Instead, we’re back to treating sound as it occurs in actual reality – in mono.  

What about reverb?

Reverb is most believable when it sounds like it’s all around us.  Since we’re creating mono sounds, baking reverb into them and then spatializing the result so that it seems to come from a single point in space isn’t going to create a particularly realistic effect.  Instead, in many (probably most) cases, it’s going to be better to create our sounds “dry,” without reverb, and add the reverb after the fact within the game.

Ambient sounds all around

Another difference in developing sounds for VR is that it no longer makes sense to use static ambient sounds.  It can break the illusion of reality if the player turns or moves and the ambience doesn’t shift accordingly.  Therefore, instead of thinking of ambience as a bed of sound that just sits there, it may serve us better to think of it as the sum of multiple quiet sounds originating from several points spread around an environment.
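
As a sketch of that approach: scatter a few quiet, looping emitters around the space instead of playing one static bed.  The place_emitter() call and the filenames here are hypothetical stand-ins for however your engine spawns a spatialized looping sound:

```python
import math
import random

# Hypothetical ambient layers for, say, a forest scene.
AMBIENT_LAYERS = ["wind_leaves.wav", "birds_distant.wav", "creek.wav"]

def scatter_ambience(center, radius, place_emitter, count=8):
    """Spawn quiet looping emitters at random points in a disc."""
    for _ in range(count):
        angle = random.uniform(0.0, 2.0 * math.pi)
        dist = radius * math.sqrt(random.random())    # uniform over the disc's area
        pos = (center[0] + dist * math.cos(angle),
               center[1] + dist * math.sin(angle))
        place_emitter(random.choice(AMBIENT_LAYERS),
                      position=pos,
                      volume=random.uniform(0.1, 0.3))
```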

 

These are just a handful of the initial differences I’ve come across as I’m starting to explore this new paradigm of sound design.  There’s lots more to cover, and I’m sure lots more to be discovered.  


Cheers,

Earl

Welcome to my VR Sound Design blog

Welcome to my blog about VR sound design.

I recently started designing sound for a VR game, and in trying to research the process I’ve found that there’s very little information about it available so far.  It’s relatively new, and there are lots of unknowns – it’s an exciting new frontier in audio!

I decided to start this blog to help fill the info void.  I’ll be posting things I discover, techniques I develop, and links to relevant information that I dig up along the way.  I hope that, in doing so, I might help others along their journey as we explore these largely-uncharted waters.

Stay tuned!

Cheers,

Earl