
Eliminating D/A Conversion: Using Our Senses to Integrate

sidebar for "Following The Digital Audio Chain", by Michael Karagosian
©2004 Karagosian MacCalla Partners. All rights reserved worldwide.


When a source produces a digital signal, it is common sense to eliminate as many D/A conversions as possible to retain the best signal quality. This might lead one to think that the ultimate experience of digital sound and picture is for the signal to be integrated by our ears and eyes rather than by electronics.

Consider, for instance, the DLP® digital light processing technology from Texas Instruments. This technology is based on an array of digitally controlled mirrors, each capable of being switched to reflect light either toward a lens or away from it. This binary action allows an individual mirror to be toggled such that the average light reflected toward the lens has the desired intensity. Effectively, there is no filtering to integrate the light signal before it reaches our eyes (although one can argue that the lens itself introduces a small integrating effect). Thus, when we observe an image produced by a DLP® array, it is our eyes that integrate the digitally modulated light, not the projection device. The result is an image with an effectively flat Modulation Transfer Function (MTF). (MTF can be thought of as a spatial frequency response.) Such images have proven very revealing of detail that is otherwise obscured by the integrating action of the phosphor in cathode ray tubes. This demonstrates the power of removing electronic integration from our sensory experience and allowing our own senses to perform the integration.
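
To make this toggling idea concrete, here is a minimal sketch of pulse-density modulation, one simple way to switch a binary element so that the running average of its on/off stream tracks a target intensity. It is written in Python purely for illustration; the bit-plane scheduling actually used in DLP® devices is considerably more elaborate, and the function name and pulse count here are assumptions of this sketch.

    # A binary mirror is either ON (1) or OFF (0). Toggling it so the mean
    # of the pulse train equals the target intensity leaves the integration
    # to the viewer's eye rather than to the electronics.
    def pulse_density_modulate(target: float, n_pulses: int) -> list[int]:
        """Return a 0/1 pulse train whose mean approximates target (0..1)."""
        error = 0.0                # accumulated quantization error
        pulses = []
        for _ in range(n_pulses):
            error += target
            if error >= 0.5:       # enough demand accumulated: emit ON
                pulses.append(1)
                error -= 1.0
            else:                  # otherwise emit OFF and carry the error
                pulses.append(0)
        return pulses

    train = pulse_density_modulate(0.3, 20)
    print(train)                    # e.g. [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, ...]
    print(sum(train) / len(train))  # 0.3: the eye perceives 30% intensity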

It is entirely possible that we will have a similar experience when electronic integration of the digital signal is removed from audio devices. Electronic integration of the audio signal occurs whenever a D/A conversion is made. While the number of such conversions is reduced by digital interconnects, the only way to completely remove D/A conversion from the audio signal path is to employ a digital loudspeaker, allowing our ears to perform the signal integration.

Digital loudspeaker technology is still in its infancy, and more than one approach has been explored. The best approach to date for achieving a pure digital speaker requires unary digital signals. Unary coding is essentially parallel in nature: a binary 3, for instance, would be represented in unary as 1 1 1, that is, as three parallel signals asserted at once. By performing the binary-to-unary conversion at oversampled rates, and by using the unary representation to drive an array of equally weighted binary pressure transducers, instantaneous sound pressure waves can be created that require integration by our ears. Notably, patents for unary technology have been assigned to both 1 Ltd and Texas Instruments.
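
The conversion itself is simple thermometer coding, sketched below in Python. The array size and the fixed left-to-right element selection are illustrative assumptions; a practical design would rotate which transducers fire so that element mismatches average out.

    # Thermometer-code a sample across an array of equally weighted on/off
    # transducers: a value of N turns on N elements, so the instantaneous
    # pressures sum to the coded value and the ear performs the integration.
    def binary_to_unary(sample: int, n_transducers: int = 7) -> list[int]:
        """Unary (thermometer) code for 0 <= sample <= n_transducers."""
        if not 0 <= sample <= n_transducers:
            raise ValueError("sample out of range for this array")
        return [1] * sample + [0] * (n_transducers - sample)

    print(binary_to_unary(3))   # [1, 1, 1, 0, 0, 0, 0] -- binary 3 as "1 1 1"
    print(binary_to_unary(5))   # [1, 1, 1, 1, 1, 0, 0]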

However, it remains to be demonstrated that integrating the digital signal with our ears provides the best possible listening experience. "The ear is non-linear, and not necessarily the best low-pass filter," suggests Tony Hooley, president of 1 Ltd, which holds the base patents for unary loudspeaker technology. Hooley envisions that unary transducer technology will make possible tiny sound reproducers for headsets and hearing aids. But then again, maybe we'll be in for a surprise.



Wave Field Synthesis

sidebar for "Following The Digital Audio Chain", by Michael Karagosian
©2004 Karagosian MacCalla Partners. All rights reserved worldwide.


Wave Field Synthesis (WFS) is based on the work in wave propagation by the Dutch physicist Christiaan Huygens in the 17th century. For more than 20 years, the Technical University of Delft in The Netherlands has promoted Huygens' principle as a technique for reproducing sound fields. Huygens stated that the wavefront of a source at any instant can be reproduced by many secondary sources located on the perimeter of the wavefront at the prior instant. This principle can be used to synthesize a sound field that provides a dimensional and realistic emulation of the original sound source.
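
In loudspeaker terms, each driver in a closely spaced array acts as one of Huygens' secondary sources. The Python sketch below derives per-speaker delays and gains for a virtual point source from geometry alone; the speaker spacing and source position are made-up numbers, and real WFS driving functions add spectral prefiltering and array windowing that are omitted here.

    import math

    C = 343.0  # speed of sound in air, m/s

    # Delay and gain for each speaker of a linear array along the x axis,
    # so that the array re-radiates the wavefront of a virtual source.
    def wfs_delays_and_gains(source_xy, speaker_xs, speaker_y=0.0):
        sx, sy = source_xy
        params = []
        for x in speaker_xs:
            r = math.hypot(x - sx, speaker_y - sy)  # source-to-speaker distance
            delay = r / C                           # wavefront arrival time
            gain = 1.0 / max(r, 0.1)                # spherical spreading loss
            params.append((delay, gain))
        return params

    # Virtual source 2 m behind an 8-speaker array spaced 0.2 m apart.
    speakers = [i * 0.2 for i in range(8)]
    for d, g in wfs_delays_and_gains((0.7, -2.0), speakers):
        print(f"delay = {d * 1000:5.2f} ms, gain = {g:.2f}")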

A compelling feature of WFS is its ability to extend the sweet spot across the entire listening space. Demonstrations have shown that it can provide better spatial sound than the multi-channel systems in use today. Considerable development of a WFS audio system has taken place at the Fraunhofer Institute for Digital Media Technology in Germany, led by Prof. Dr. Karlheinz Brandenburg, Dr. Thomas Sporer, and Dr. Sandra Brix. The Fraunhofer system is named IOSONO and has been installed in several venues, including the motion picture theatre "Lindenlichtspiele" in Ilmenau, Germany.

WFS, however, is neither simple nor cheap with today's technology. It requires tens or possibly hundreds of individually amplified speakers, along with extensive digital signal processing programmed with parameters of the acoustical environment. Even so, WFS is realizable with available technology, as the IOSONO system has demonstrated.

Where can WFS technology lead? Prof. Dr. Brandenburg explains his vision: "The next years will bring a paradigm shift in high quality audio… We will move from signals reproduced one by one at the highest possible quality to what I call the 'intelligent stereo set.' This will take all kinds of input (two channel stereo, 5.1, IOSONO WFS etc.) and use DSP to recreate the best possible user experience. It will know about the number and position of the loudspeakers by means of an automatic setup procedure. It will know the room size and room acoustics parameters and use a digital bus or wireless transfer to get the digital signals to active loudspeakers."

More about IOSONO can be found at http://www.iosono-sound.com.