Posted: April 10, 2000
Written by: Tuan "Solace" Nguyen

Now that you know how 3D sound works in conjunction with your ears, let's get down to the 3D sound APIs that create the magic.

DirectSound3D

Developed as part of the DirectX library, Microsoft's own API was designed to offer basic volume, panning, and Doppler effects. This time around, Microsoft decided that DS3D version 5.0 would allow developers to use whatever property sets and additional code they wish to extend DS3D. This gives DS3D access to proprietary hardware with exclusive effects.

If your soundcard is DS3D capable, it can accept DS3D calls and render the sound output however it sees fit. If your soundcard does not support DS3D calls in hardware, DS3D renders the sound source output itself in software. Although you may think this is great and you won't need to fork out those mint bills for a new soundcard, DS3D in software produces nothing close to what the sound would be like had it been rendered in hardware, and it requires giving up great amounts of CPU cycles to render the sound. Although that is one drawback of software DS3D, it does offer more efficient rendering algorithms than AC-3 and DTS, and two levels of HRTFs are packed into DS3D to help achieve the effect.

DS3D in DirectX 7.x also includes voice management for capable hardware. This lets the application determine the number of 3D sound streams the soundcard can render simultaneously. If a request exceeds the maximum number of streams the soundcard can handle, the extra stream is passed to the CPU to render. If even that is not possible, the 3D audio stream is down-mixed to simple stereo. Plain stereo signals don't require complex filtering algorithms, so a soundcard can render many more stereo streams than 3D audio streams.

Aureal3D

A3D is an API developed in-house by Aureal Semiconductor.
A3D is hailed as the most effective 3D sound API, and those who have heard it tend to agree that it is nothing less than amazing. Aureal's method is purely mathematical: the company decided that the best way to achieve accurate 3D sound was to accurately model the on-screen environment and manipulate the sound source accordingly. A3D 2.0 actually takes the geometry of the game scene into account and uses wavetracing to calculate sound reflections and occlusions in real time. Occlusion is what you hear when a sound reaches you with another object, such as a wall, in the way: a muffled, low-volume effect. A3D is also capable of rendering height (z-axis) relations. A3D 1.0 was simply based on HRTFs with front and rear effects. With A3D 3.0, Aureal has added geometric reverb effects, support for EAX 1.0 and 2.0, Dolby Digital decoding and MP3 decoding, and volumetric sounds, such as large crowds, that appear on a soundstage too big for point sources. All of this is handled by different algorithms Aureal has developed for different speaker modes: satellites, quad speakers, monitor speakers, and headphones.

As with all surround techniques, there is a sweet spot you must be situated in to hear the most convincing effect. Since earphones always put you in the optimum position, Aureal naturally recommends them as the best way to hear the most realistic effect. If you decide to set up four-speaker mayhem, be aware that HRTFs are only applied to the front speakers, but the rears still augment the effect better than a two-speaker front setup.