Having reached this level of understanding, the next phase in my research involved developing a computational model for arrayed point sources. In the next few posts I’m going to introduce you to version 1 of this model, and we are going to run some tests on it to explore different elements of array behaviour.

**Model Basics**

Towards the end of post #5 (“Phase & Interference”) I introduced two equations which together can be used to describe the phase and amplitude of the superposition of a number of coherent sources at any given frequency. All we need to calculate this superposition is knowledge of the phase and amplitude of the wavefront of each individual source at the listener position. If we know the relative positions of each source and the listener, and we can assume the sources to be perfect, coherent, evenly radiating omni-directional point sources (i.e. generating identical wavefronts, in phase, at exactly the same time), then the only other piece of the puzzle is the level of each individual source. From here obtaining the necessary variables for our equations becomes a simple geometry exercise.

The value that we are really interested in is the amplitude of this superposition. This tells us the SPL from the array, at a specific frequency, at a specific listening position. By repeating this calculation over a range of listener positions we can build up a picture of the level across the listener plane in one, two or three dimensions (for reasons I’ll discuss at a later point, my code for version 1 of the model is restricted to a one dimensional analysis). Alternatively we could instead repeat the calculation over a range of frequencies to build a picture of the frequency response at a specific position.
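To make the calculation concrete, here is a minimal sketch of the superposition at a single listener position and frequency. It is written in Python purely for illustration (the actual model scripts are in Octave), and the function and variable names here are hypothetical:

```python
import math

def array_spl(sources, listener, f, c=343.0):
    """Superpose coherent point sources at one listener position and frequency.

    sources  -- list of (x, y, level) tuples, where level is the pressure
                amplitude measured at 1 m from that source
    listener -- (x, y) listener position in metres
    f        -- frequency in Hz
    Returns the amplitude of the superposed waveform via phasor addition.
    """
    re = im = 0.0
    for sx, sy, level in sources:
        r = math.hypot(listener[0] - sx, listener[1] - sy)  # path length (m)
        p = level / r                      # inverse square law: pressure ~ 1/r
        phi = 2 * math.pi * f * (r / c)    # phase angle from propagation delay
        re += p * math.cos(phi)            # accumulate phasor components
        im += p * math.sin(phi)
    return math.hypot(re, im)
```

Sweeping `listener` over a range of positions builds up the level across the listener plane; sweeping `f` at a fixed position builds up the frequency response, exactly as described above.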

Using GNU Octave (a free programming language that is more or less the same as MATLAB) I have created scripts to run these calculations based upon user-defined input variables describing the array and listener positions. I share these scripts below.

**Assumptions**

It is important to be clear on the assumptions that this model is making. We have made a number of simplifications, some of which are more important than others. This version of the model remains a way off from truly representing the behaviour of a modern line array system, which is fine so long as we understand the implications of this limitation.

In actual fact, our model is deficient in many of the same ways that very early, rudimentary attempts at developing line array systems were deficient. As we go forwards this will prove a very helpful similarity, as it will enable us first to explore the problems with such rudimentary configurations, and then to explore how modern line array systems resolve those problems by refining later versions of the model to emulate subsequent advances.

The chief assumptions made in this version of the model are:

- Every cabinet in the array is modelled as a single individual point source, the origin of which is located centrally to the front face of the cabinet.
- Cabinets are presumed to be uniformly omni-directional at all frequencies.
- Cabinets are presumed to have a perfect, uniformly flat frequency response across the entire test range.
- SPL attributable to each individual cabinet is presumed to drop in perfect accordance with the inverse square law.
- The model calculates only the level directly attributable to the array and assumes a perfect free-field environment with no reverberation.
- The model ignores attenuation due to atmospheric absorption.
- The model assumes even atmospheric conditions, with no refractive effects and a fixed speed of sound at 343m/s.

I want to suggest that the two most important limitations here are:

- We are not modelling the directivity characteristics of modern line array cabinets (which are typically very narrow in their vertical dispersion).
- We are not modelling the isophasic behaviour created by wave guides in modern line array cabinets (we’ll see in a subsequent post just how important this is).

These will form the basis for later revision of the model, but for now this initial version of the model will suffice to help us understand some basic line array principles whilst maintaining a simplicity that will make this easier to get our heads round in the first instance.

**GNU Octave**

GNU Octave is a free programming language designed primarily for numerical computations. It’s a bit like a graphical calculator on steroids.

Readers with an engineering or mathematics background may be familiar with MathWorks’ MATLAB software; Octave is essentially an open-source alternative to MATLAB, available for free (and completely legally) under the GNU General Public Licence. It has been designed to be compatible with MATLAB so scripts should in theory be portable between the two.

Readers wishing to experiment and follow along with subsequent posts using the model will need to acquire a copy of either Octave or MATLAB. The latest version of Octave can be downloaded from the GNU website here.

**Model Scripts**

Here is a downloadable zip file containing a number of .m files (MATLAB/Octave script files). To run the model you simply need to download the files and extract them to your Octave/MATLAB working directory. Once in this directory these scripts become callable functions.

There are a number of scripts included, but the main one which you will need to call to run the model is the *arrx.m* script. You don’t need to worry about the remaining scripts; these serve as subfunctions for the *arrx.m* script and are called from within it, so you shouldn’t need to call them directly. Just make sure that they are all installed in the same directory.

The code is extensively commented, so interested readers are at liberty to work through it and see how the script computes SPL based upon the original equations I presented in post #5. The code is all open-source and provided freely for your use; feel free to modify, re-use, build upon, or take snippets from the code for your own developments if you wish. My only request is that you provide attribution anywhere you use the code or any derivative of it (either in part or in full).

In my next post I will talk through the use of these scripts in a bit more detail before we begin to investigate some preliminary results.

I want to consider now what these effects look like when we take a broader view, considering a) frequency response across the audible range at a given listening position, and b) level of a given frequency as we adjust the listening position.

**Comb-Filtering**

Let’s consider first a scenario in which we have two sources and a fixed listener position. In the previous post we examined this scenario at a specific frequency, now we are going to extend this to look at the effects of a broadband source.

As we recall from the previous post, with two sources it is likely that there will be a path length difference between the two sources and the listener (unless the listener happens to be situated on the plane that is exactly equidistant from the two sources). There is a time delay between the arrival of wavefronts from the two sources corresponding to this path length difference. This time delay results in a corresponding phase angle between wavefronts from the two sources at a given frequency.

It is key to appreciate that the phase angle created is dependent upon frequency.

Frequency is directly related to wave length:

c = fλ

Where *c* is the speed of sound, *f* is frequency, and λ is the wavelength.

As *c* is a constant we can see that as *f* increases λ must decrease, so for higher frequencies the wavelength is shorter. One wavelength corresponds to a phase angle of 2π radians (or 360°), so a given difference in path length from the listener to two sources will therefore result in a bigger phase angle the higher the frequency.

The graph below demonstrates how a 0.5ms delay results in a different phase shift for waves of different frequencies. Notice how the 2kHz wave has shifted an entire wave cycle, whilst the 1kHz wave has only shifted 180°, and the 500Hz wave only 90°:

Calculating the phase angle at a given frequency and known path length difference becomes quite straightforward:

Φ = 2π(X_{1}-X_{2})/λ
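Since t_{delay} = (X_{1}−X_{2})/c and c = fλ, this is equivalent to Φ = 2πf·t_{delay}. A quick numeric check (a Python sketch with a hypothetical helper name) shows how a fixed 0.5ms delay maps to a different phase angle at each frequency:

```python
import math

def phase_shift_deg(f, t_delay):
    """Phase shift in degrees produced by a time delay t_delay (s) at frequency f (Hz)."""
    return math.degrees(2 * math.pi * f * t_delay)

# A fixed 0.5 ms delay produces a frequency-dependent phase shift:
#   500 Hz -> 90 degrees, 1 kHz -> 180 degrees, 2 kHz -> 360 degrees (a full cycle)
```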

If we now consider a variable frequency scenario it is evident that the phase angle will vary for different frequencies. We understand that the resultant interference is dependent upon the phase angle, so this will also vary based upon frequency. At certain frequencies interference will be constructive as the two sources will be in phase, at other frequencies it will be destructive as the two sources will be in anti-phase.

The resultant pattern is known as *comb-filtering*; as you will see from the example graph below, we get a series of notches in the frequency response which create a shape resembling a comb:

The precise frequencies of the various notches in the filtering pattern will vary depending upon the difference in path lengths, however the basic pattern of a series of alternating sharp notches and shallow peaks is consistent regardless of the relative positions of the sources and the listener.

In a world where we are extremely demanding of the quality of the frequency response from the majority of our components, putting multiple coherent sources into our environment feels like an extremely poor design choice. The comb-filtering effect seems to be pretty damning for the prospects of our audio quality.

It is worth noting that the example above is pretty extreme. It presumes only two sources (the absolute worst case scenario) and no reverberation. Adding additional sources does not fix the problem, although it will at least partially smooth out the worst effects; when any two of the sources might be in anti-phase, odds are that the remaining sources will be slightly more favourably aligned. If we put our system in a reverberant environment then reflected wavefronts will also help with this. The result will still not be a linear frequency response, but will at least smooth over the most severe notches.
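For two equal coherent sources the comb pattern can be sketched directly: the combined level relative to a single source follows 20·log10|2cos(Φ/2)|. A Python illustration (the helper name is hypothetical, not part of the model scripts):

```python
import math

def comb_level_db(f, t_delay):
    """Level (dB) of two equal coherent sources relative to one source alone,
    given the difference in arrival time t_delay (s) between them."""
    phi = 2 * math.pi * f * t_delay         # relative phase angle at frequency f
    amplitude = abs(2 * math.cos(phi / 2))  # magnitude of the two-phasor sum
    return 20 * math.log10(amplitude) if amplitude > 0 else float('-inf')

# With a 1 ms arrival difference the notches fall at 500 Hz, 1.5 kHz, 2.5 kHz, ...
# and the +6 dB peaks at 1 kHz, 2 kHz, ... -- the characteristic comb shape.
```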

**Varying Position**

Let’s now consider an example where we allow the listening position to vary. This is important because whilst it is easy to analyse comb-filtering at a specific position, this is not representative of a real-life listening environment. In reality we have to design our system not just to serve a single fixed listener position, but an entire listener plane (or even multiple planes) distributed throughout the venue.

We already know that there will be comb-filtering in the frequency domain at any given listener position. Whilst the pattern remains the same, the position of notches and peaks in the frequency domain will shift as listener position changes. This may appear random, but if we return to considering just a single frequency it becomes more predictable.

The following graphs look at the level of two frequencies as we adjust the listener position. In this example we are looking at a typical two speaker stereo system and the listener position is moving from left to right across the venue in front of the speakers:

We can see that at both frequencies there is a similar pattern, although with some important differences. Both examples are symmetrical about the centre-line of the venue. In both examples there are a series of troughs in the level as we move off axis. For the lower frequency 100Hz example these are wider but less frequent, whilst for the higher frequency example these become much narrower but also more frequent.

This is the general pattern: the higher the frequency, the narrower and more frequent the troughs; the lower the frequency, the wider and less frequent the troughs. In fact, if we went low enough in the frequency range we would eventually reach a point at which there are no more troughs.

Why would this be? It occurs because the wavelength is such that, regardless of listening position, the path length difference between the two speakers and the listener is never enough to put the two sources into anti-phase. In this scenario we would say that the two sources are *coupled*.

**Coupling**

Let’s consider the 100Hz example. In the example above the two speakers are clearly separated by enough that they will interfere negatively at some angles. However if we move them closer together this will cease to be the case.

At 100Hz sound has a wavelength of roughly 3.4m. Half a wavelength is 1.7m. If we position the two speakers 1.7m apart we will only get one zone of negative interference; this will be at its most significant when we are exactly 90° off axis, as the wavefront from the second speaker will arrive precisely half a wavelength behind and so be in anti-phase.

If we move the speakers closer it is now no longer possible for the two speakers to be in perfect anti-phase; regardless of the position of the listener, the path length difference cannot be enough (at 100Hz or lower) to create a complete cancellation.

At 1.1m the two speakers are now within a third of a wavelength. Remember that at a path length difference of a third of a wavelength or less the interference is always positive. This is the point at which I classify the two speakers as coupled. Regardless of the listener position, we always get positive interference of some degree between the two.

This coupling is frequency dependent as it is based upon wavelength, so two speakers will couple at lower frequencies sooner than they will couple at high frequencies. At 1.1m the speakers are coupled at 100Hz and everything below, but are not yet coupled at higher frequencies. At 1kHz the speakers need to be much closer (around 11cm), and at 10kHz closer still (11mm). This obviously creates some challenges due to the physical size of loudspeakers. We cannot get the centre points of most loudspeakers 11mm or even 11cm apart because of the physical body of the cabinet, so whilst it is relatively easy to create coupling at lower frequencies we cannot achieve it (at least not by physical positioning of loudspeaker cabinets alone) at higher frequencies.
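Under this λ/3 criterion the maximum coupled spacing at a given frequency is simply c/(3f). A small Python helper (hypothetical, for illustration) reproduces the figures above:

```python
def coupling_distance(f, c=343.0):
    """Maximum source spacing (m) for coupling at frequency f (Hz),
    using the lambda/3 criterion: one third of a wavelength."""
    wavelength = c / f
    return wavelength / 3

# 100 Hz -> ~1.14 m, 1 kHz -> ~11.4 cm, 10 kHz -> ~11.4 mm
```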

**Why λ/3?**

Here’s a question to consider: why are we considering coupling to occur at λ/3?

If you explore texts on line array you’ll see various values used relating to wavelength, for instance λ/2 and λ/4 both crop up a lot. If you dive too deeply into Fresnel analysis you’ll even start seeing some λ/2.7. What’s this all about?

As is hopefully becoming clear, wavelength (and by proxy; frequency) is a very important factor in these acoustic relationships. We’ll explore different terms relating to ratios of wavelength and discuss them as they come up, but for now I just want us to appreciate this most fundamental relationship between two loudspeakers.

λ/3 is the difference in path length at which we transition between positive and negative interference, so this is the ratio I am going to use to define coupling of two sources throughout my explorations.


We are surrounded all the time by point sources; however, an array like this is a special case, as the sources it is assembled from are what we call *coherent sources*, by which we mean that they are producing identical waveforms, in time and in phase with one another. Coherent sources interact with one another in some unique ways which we will now explore.

**Phase**

Let’s begin by considering an individual source, producing a single frequency. The sound wave being emitted has three important properties:

- Amplitude
- Frequency
- Phase

I’m going to assume for now that readers are familiar with amplitude and frequency, but let’s take a deeper look at phase.

Phase describes the position in the wave cycle at any given snapshot in time and space. A single frequency waveform is described by a sine wave, so we identify phase as an angle corresponding to the current position in the wave cycle.

If we assume a fixed listener position, phase becomes determined by time. When listening to a single source like this, phase itself does not really matter; it is constantly changing at a fixed pace (determined by the frequency), so whilst the rate of change is important, the actual value of the phase at any given moment is entirely arbitrary in so far as our perception is concerned. The best way to think of this is like shifting the origin point on a graph: the values change but the shape of the waveform does not.

This all changes when we add a second coherent source. Individually the phase of the two sources does not matter, but when listeners are exposed to both sources at the same time any relative phase difference between them becomes very important.

Phase differences like this will occur due to the listener position. Although the two sources may be operating in phase, if the listener is not equidistant from the two sources there will be a time delay (and thus a corresponding phase delay) between one and the other.

We can calculate this fairly simply:

t_{delay} = X/c

Where *t_{delay}* is the time delay between the source generating the wavefront and it arriving at the listener, *X* is the distance from the source to the listener, and *c* is the speed of sound.

The phase angle (relative to the phase of the source) is then:

Φ = 2πft_{delay}

Where Φ is the phase angle in radians and *f* is the frequency. Note the importance of the frequency term: phase is directly related to frequency, so at a different frequency the phase angle will also be different.

If we measure Φ for both sources and then subtract one from the other we get the relative phase angle between the two.

Depending on the relative distances between the listener and the two sources the relative phase angle could be either small or large. One point to note is that as the phase angle describes a sine wave, the value of the wave is cyclical at a frequency of 2π. The value at Φ = 0 is the same as the value at Φ = 2π, which is the same as the value at Φ = 4π etc. We can therefore simplify matters by considering large relative phase angles to induce behaviour identical to the corresponding angle within the range of -π ≤ Φ ≤ π.
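These steps can be sketched in Python (the helper name is hypothetical), including the wrapping of large angles into the range −π ≤ Φ ≤ π:

```python
import math

def relative_phase(x1, x2, f, c=343.0):
    """Relative phase angle (radians) between two coherent sources at
    distances x1 and x2 (m) from the listener, at frequency f (Hz),
    wrapped into the range [-pi, pi]."""
    phi1 = 2 * math.pi * f * (x1 / c)  # phase angle of source 1 at the listener
    phi2 = 2 * math.pi * f * (x2 / c)  # phase angle of source 2 at the listener
    d = phi2 - phi1
    return math.atan2(math.sin(d), math.cos(d))  # exploit the 2*pi periodicity
```

A listener equidistant from both sources sees a relative phase of zero; a path difference of half a wavelength gives ±π (anti-phase).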

**Interference**

The relative phase angle is crucial to determining how the two sources interact. The sound waves from the two sources will be superimposed at the listener position, interfering either constructively or destructively depending upon the relative phase angle.

For a phase angle of Φ = 0 the two sources are “in phase” with one another and the interference is constructive. The waveform from one source lines up exactly with the waveform from the second source, and the superposition produces a resultant waveform which is the same frequency, but double the amplitude of either of the individual waves. This is an increase in level of +6dB SPL over the level of either of the individual sources.

For a phase angle of Φ = ±π the two sources are in “anti phase” with one another and the interference is destructive. The waveform from one source is exactly the opposite of the waveform from the second source and the superposition produces a null waveform as they completely cancel one another out. This is a decrease in level of -∞dB.

For other phase angles the effects are less pronounced. The superposition will produce a waveform of the same frequency, but different amplitude and phase depending upon the relative phase angle between the two sources. For all -2π/3 < Φ < 2π/3 interference will be constructive. For all Φ < -2π/3 or Φ > 2π/3 interference will be destructive.

The case of Φ = ±2π/3 is a particularly interesting scenario in which the superposition produces a waveform of the same amplitude as the individual sources, with a phase angle half way between the two. In this scenario interference is neither constructive nor destructive, adding the second source makes no difference to the amplitude.
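For two equal-amplitude sources the resultant amplitude works out to 2cos(Φ/2) times that of a single source, which reproduces all of the cases above. A Python sketch (hypothetical helper name):

```python
import math

def superposed_gain_db(phi):
    """Level change (dB) when two equal coherent sources combine at
    relative phase angle phi, compared with one source alone."""
    amplitude = abs(2 * math.cos(phi / 2))  # resultant relative amplitude
    return 20 * math.log10(amplitude) if amplitude > 0 else float('-inf')

# phi = 0        -> +6.02 dB (in phase, amplitude doubled)
# phi = 2*pi/3   ->  0.00 dB (boundary case: no change in level)
# phi = pi       -> complete cancellation
```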

**Waves of Different Amplitude**

So far we have been making one major simplification: we have assumed that the waveforms arriving from the two sources are of the same amplitude. In reality this is unlikely to be the case. The two sources will be outputting the same level, but the amplitude of each will decay with distance in accordance with the inverse square law. In a scenario where the two sources are not equidistant from the listener we therefore need to factor in the relative level of the two sources.

If p_{source} is the amplitude of the waveform measured at 1m from the source, and X is the distance from the source to the listener, then p at the listener position will be:

p = p_{source}/X

Where the amplitude of the two waveforms differs, calculating the resultant waveform becomes slightly more complicated and the relationships we’ve described above break down. They still conform to the same general pattern of constructive and destructive interference, but the magnitude of the interference is less significant.

We can still determine the phase and amplitude of the resultant waveform, but the math becomes a little more complex: we use a process called phasor addition. A *phasor* (or *phase vector*) is a vectorised representation of the wave, described in polar co-ordinates.

**Polar Plots**

We are all used to Cartesian plots, where we define the position of a point in terms of its value on two axes (usually *x* and *y*). Position can be defined in absolute terms, relative to the origin at (0,0), or in relative terms relating to the position of another known point.

A polar plot differs as we instead define the position in terms of an angle and a magnitude. Again, this can be absolute or relative.

Our single frequency waveform can be defined on this landscape as a vector using polar co-ordinates. Superimposing a second waveform of the same frequency then becomes a simple matter of vector addition. As we know the angle and magnitude of each vector, we can calculate the resultant vector via Pythagoras and trigonometry to give us the phase and amplitude of the superimposed waveform.

p_{total} = √[(p_{1}cosΦ_{1} + p_{2}cosΦ_{2})^{2} + (p_{1}sinΦ_{1} + p_{2}sinΦ_{2})^{2}]

Φ_{total} = tan^{-1}[(p_{1}sinΦ_{1} + p_{2}sinΦ_{2})/(p_{1}cosΦ_{1} + p_{2}cosΦ_{2})]

Where p_{total} and Φ_{total} are the resultant amplitude and phase angle of the superimposed waveform, and p_{1/2} and Φ_{1/2} are the amplitude and phase of the original waveforms.

We can modify these equations for scenarios where we have more than two sources:

p_{total} = √[(Σ p_{n}cosΦ_{n})^{2} + (Σ p_{n}sinΦ_{n})^{2}]

Φ_{total} = tan^{-1}[(Σ p_{n}sinΦ_{n})/(Σ p_{n}cosΦ_{n})]

Where the sums run from n = 1 to n_{total}, n_{total} is the total number of sources, and p_{n} and Φ_{n} refer to the amplitude and phase of each individual source.
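The whole phasor-addition procedure, for any number of sources, fits in a few lines of code. Sketched here in Python (the function name is hypothetical; the model scripts implement the same idea in Octave):

```python
import math

def phasor_sum(sources):
    """Phasor addition of n coherent waveforms of the same frequency.

    sources -- list of (p_n, phi_n) amplitude/phase pairs
    Returns (p_total, phi_total) for the superposed waveform.
    """
    re = sum(p * math.cos(phi) for p, phi in sources)  # sum of in-phase components
    im = sum(p * math.sin(phi) for p, phi in sources)  # sum of quadrature components
    return math.hypot(re, im), math.atan2(im, re)
```

Two equal in-phase sources double the amplitude; two in anti-phase cancel completely, as described above.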

**Conclusions**

So far we’ve developed equations to determine the resultant waveform at fixed positions and frequencies. In my next post we are going to go on to explore how these effects translate when we put multiple coherent sources into an environment involving variable frequency and listener positions.

Let’s start by reviewing the difference between line sources and point sources, the two principal types of sound radiating source we encounter in live sound reinforcement.

**Point Source**

A point source is the most common form of source we encounter in the real world. A typical loudspeaker driver will behave as a point source. A human voice will behave as a point source. Most instruments will behave as a point source.

The above is in fact a bit of a simplification. The truth is that all of these are complex three-dimensional objects with three-dimensional radiating surfaces, but we can typically make the approximation to a point source at the majority of relevant listening distances.

We consider a point source to be a single infinitesimally small point in space, from which sound waves emanate spherically. Energy may not be equally distributed in all directions, but once it has begun its journey away from the source it will decay at a consistent rate in accordance with the inverse square law.

**The Inverse Square Law**

The inverse square law describes the propagation of a sound wave created by a point source.

We can imagine the wavefront travelling away from the source as the surface of an ever expanding sphere. The further we travel from the source the larger the surface area of the sphere.

If we consider a segment of the sphere with surface area ‘A’ at radius ‘r’, we can see that if we double the radius to 2r the surface area of the same segment quadruples to 4A; in fact we know from the equation for the surface area of a sphere that the surface area of the segment is proportional to the *square* of the radius (2^{2} = 4).

This segment represents a fixed portion of sound energy as it travels through the air. Sound intensity, which is the measure of sound power per unit area, is inversely related to the surface area of the segment. The larger the surface area the more spread out the energy is, so intensity is reduced the further we are from the source. Intensity is thus inversely proportional to the square of distance from the source (hence the *inverse square* law).

I ∝ 1/r^{2}

The property of sound that we are most interested in however is not sound *intensity* but sound *pressure*. Pressure happens to be proportional to the square root of intensity, which tells us that sound pressure is inversely proportional to distance.

I ∝ p^{2}

∴ p ∝ 1/r

In simpler terms; a doubling of distance equates to a halving of the sound pressure. In decibels this is a decay of 6dB SPL every time we double the distance from the source.

**Line Source**

A line source differs from a point source in that the sound is considered to radiate cylindrically away from a line (rather than spherically away from a point).

As with point sources, no real-life radiator is a true line source; however, there are some that we can approximate as a line source at certain listening distances. One really good example that has always stuck with me (I think it was from an environmental noise lecture) is traffic on a busy motorway: from a distance you don’t hear the individual cars, but the combined noise of the whole motorway, which is one long line stretching across the landscape, and sound energy will propagate as per a line source.

The theoretical infinite length line source is perfectly focused with 0° dispersion angle in the plane of the line, effectively reducing the spread of energy to two dimensions.

In this scenario intensity becomes inversely proportional to the radius, so SPL is inversely proportional to the square root of distance from the source.

p ∝ 1/√r

In decibel terms this corresponds to a decay of 3dB SPL for every doubling of distance vs. 6dB from a point source.
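The two decay laws are easy to compare numerically. A Python sketch (hypothetical helper names) of the drop in SPL between two distances:

```python
import math

def point_source_drop_db(r1, r2):
    """SPL drop (dB) moving from r1 to r2 for a point source (p ~ 1/r)."""
    return 20 * math.log10(r2 / r1)

def line_source_drop_db(r1, r2):
    """SPL drop (dB) moving from r1 to r2 for an ideal line source (p ~ 1/sqrt(r))."""
    return 10 * math.log10(r2 / r1)

# Doubling the distance: point source ~6.02 dB, ideal line source ~3.01 dB
```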

**The Two Advantages of Line Source**

What we see here are two distinct and potentially advantageous behaviours which apply to line sources.

- *Controlled dispersion:* The line source is naturally focussed in one plane (typically vertically, but this depends on the orientation of the line). This gives us powerful control over the dispersion of sound energy in that plane. From a sound design perspective there is very little more desirable than control: we can avoid wasted energy that could contribute to environmental noise pollution and cause annoyance, whilst in indoor environments we are also able to reduce the excitation of the reverberant field, which improves the direct-to-diffuse ratio (critical for intelligibility and clarity).
- *Reduced level drop over distance:* The line source decays at a considerably reduced rate vs. the point source, helping us to deliver a more consistent level further into the listening area from a single source. This can partially alleviate our dependency on delay fills and helps to create a more linear sound field.

**The Problem**

If line sources seem like the greatest thing since sliced bread, then there is only one problem: the behaviour described above is that of a theoretical, infinite-length line source, which cannot truly exist in the real world.

Finite length line sources can exist, and their behaviour will approximate that described above given the correct conditions. However, creating a line source loudspeaker is quite challenging. A small number of products do exist – one pro audio manufacturer I have seen and been quite impressed by is Alcons Audio, who produce a range of enclosures using ribbon drivers – but these are not without their issues, generally being quite delicate.

For these reasons, line *array* (which is what we see commonly deployed in professional audio applications) cannot be considered a true line source. A line array is an array of speakers which individually behave as point sources, but which when assembled together can exhibit some of the desirable characteristics of a line source.

Unfortunately, whilst a theoretical line source is not particularly difficult to comprehend, line arrays are far more complicated.

**Dispelling the Myths**

I’ll leave the discussion here for this post; however, avid readers should note that we have already dispelled the first of the great line array myths: that line *array* is equal to line *source*. Although an understanding of line source is helpful in building a basic appreciation of the possible benefits of a line array, line array is not a line source and its behaviour can be very different (arguably conveying even greater benefits when deployed correctly). At best we might describe line array as a practical approximation of a line source.

As line array does not behave exactly as a line source we also need to let go of the notion that line arrays decay at the 3dB rate. This is a very common misconception, born out of the confusion between line array and line source. Line source decays at the 3dB rate, but the decay rate of line array is much more complicated and there is no simple rule of thumb to describe it.


One of the things I feel our industry has always been a bit poor at is academia. What we do sits on the divide between science and art, leading to a delicate balancing act between the objective and subjective when informing our understanding and design approaches. Sometimes we get this balance wrong. One of my major passions is to foster greater standards of excellence both personally and within the wider industry, and that means getting better at this. It shouldn’t be enough to know just a few rules of thumb and to let a clever bit of software do all the hard work for us – that’s not engineering, it’s just laziness – we should understand the systems we are deploying intimately in order to appreciate the complexities of the decisions we make as sound designers.

Line array is probably one of the toughest areas in this respect. When I began my own investigations into line array I found the available resources to be scant, unreliable and inconsistent. Finding rigorous research is challenging; there is information out there, but much of it is shallow and of dubious quality – in the best cases accurately presenting only a partial understanding of the phenomena, and in the worst cases actually promulgating untruths. There are one or two nuggets of gold out there however, and I present these below.

**Literature**

Please note that this is not really a thorough literature review, rather you should view it more as a directory of useful resources on the topic along with a few guiding comments to contextualise each.

*Elements of Acoustical Engineering – Second Edition (1957)*, Olson, Harry F. – The original line array text; in fact there is a lot more than line array in here, but readers will find several chapters on line sources and various array configurations in section “II: Acoustical Radiating Systems”. Olson was mainly interested in directivity and describes equations for generating polar plots based upon each configuration. As I’ll discuss in a later post, the concept of a polar plot for a line array is somewhat flawed, as the array does not possess linear directivity characteristics in the same way that a point source does, but nonetheless we need to recognise the value of Olson’s work as an important early contribution to the field. I’m not sure of the legality so I won’t link directly here, but Google returns several copies of the full text in PDF for interested readers.

*Sound Fields Radiated By Arrayed Multiple Sources, 92nd Convention of the AES (1992)*, Heil, C. & Urban, M. – The 1992 AES convention paper in which Christian Heil and Marcel Urban first introduced their “criteria for arrayability”. The math is quite heavy, but it is worth it for the introduction of several key ideas. Heil and Urban describe the Fraunhofer and Fresnel zones* (more on this in a later post), as well as defining the border between the two and highlighting its frequency dependency. At the end of the paper the authors define several conditions for arrayability which were later adopted as part of L-Acoustics’ *Wavefront Sculpture Technology* (WST) criteria.

*Wavefront Sculpture Technology, 111th Convention of the AES (2001)*, Urban, M., Heil, C. & Bauman, P. – A development of the 1992 paper, in which Urban et al. go on to describe further WST criteria and the DOSC waveguide. The requirement for an isophasic wavefront (achieved via the waveguide) is defined – a critical step in controlling the behaviour of an array at shorter distances/higher frequencies.

*TI 323: C4 System Specifications and Configurations, d&b audiotechnik* – I’ve long been a big fan of the guys at d&b; I find they have a particular knack for explaining difficult concepts in comprehensible language. d&b’s C-Series includes a number of arrayable cabinets, although I wouldn’t describe these as true line array speakers as they lack the isophasic waveguides which d&b later introduced with the Q-Series. The TI 323 document for the C4 system offers some great introductory discussions of line array topics.

**Training**

Hands-on line array training seems to be quite rare, and my experience is that most engineers still do the majority of their learning on the job. Several of the top-end manufacturers offer training courses, often (with some exceptions) at a fairly substantial cost and with a strong product focus. I’ve obviously not attended all of these, but here are two I wish to highlight for UK-based readers.

*d&b audiotechnik Line Array Workshop* – d&b host a line array workshop at their UK facilities in Nailsworth. Although delivered by a manufacturer, with some leaning towards their own products, there is a lot of really good generic theory covered in the earlier part of the course before it moves on to practical rigging and demonstrations using products from across d&b’s range. It is free to attend, though numbers are limited and there is regularly a waiting list for spaces. Steve Jones and Oran Burns, who (at the time of writing) run the course, are both excellent communicators and engineers for whom I have huge respect. My one criticism of the course when I attended would be the strong focus on the Fresnel model, which I did not feel was fully contextualised with an explicit discussion of its limitations*.

*SFL Line Array Training* – As I work for SFL I cannot claim impartiality here; however, SFL’s Mark Payne runs an excellent line array course, which I believe to be unique as the only non-manufacturer-affiliated line array course in the UK. The course runs over two days: line array theory and design on day one, and rigging and practice on day two. There is also a preceding day covering a more general background in acoustics/electro-acoustics, and a follow-up day covering digital systems networking, including amplifier control networks. The course involves hands-on practice with current line array products from both d&b audiotechnik and L-Acoustics, and is very reasonably priced at just £30 per day.

**Footnote:**

*At several points in this post I have referred to Fresnel analysis. For reasons I will discuss fully in a later post, I am wary of the Fresnel approach to understanding line array. The Fresnel model is very useful and I will use it at points during my investigations; however, it is important to appreciate that it is a simplification of reality – one that has been used to good effect by several pivotal researchers, but which has its limitations. The model describes array behaviour quite accurately, but does not in itself explain that behaviour on the level at which I want to understand it, and I am interested in taking my investigations beyond it.

**Early Line Array History**

The earliest notable research into line array is generally credited to Harry F. Olson, an American electrical and acoustical engineer, who documented the basic directivity characteristics of line array configurations in *Elements of Acoustical Engineering*[1].

Olson’s work led to the development of column-style loudspeakers, which employed vertical stacks of mid-range drivers in a format that delivered highly directional propagation for vocal applications. Column speakers became a popular choice for speech reinforcement, particularly in reverberant spaces where the focussed directivity helps to minimise unnecessary excitation of the reverberant field, resulting in an improved direct-to-diffuse ratio – desirable for clarity and intelligibility. Column loudspeakers are still used for this purpose today, with class-leading products from manufacturers such as Bose and K-Array among others.

Although the column loudspeaker became popular in certain applications, its use has always been restricted by inherent performance limitations. The line-source effect breaks down where driver spacing becomes significant relative to the wavelength. For low frequencies this is not problematic, but for higher frequencies it becomes impossible to locate drivers in close enough proximity to avoid destructive interference between them at some listening positions, resulting in the unwanted phenomenon of comb filtering. Improving high frequency performance calls for smaller drivers in closer proximity, but this simultaneously reduces low frequency performance, for which larger drivers are preferable. The outcome is a trade-off which limits the usable bandwidth of the system: we balance the high frequency requirement for small, tightly packed drivers against the low frequency requirement for larger drivers, ultimately finding a compromise somewhere in between that performs well in the mid-range but tails off at high and low frequencies. This is acceptable for speech reinforcement in a reverberant environment, where the sacrifice in frequency response is balanced against the benefits of focused directivity, but it renders the column solution generally unsuitable for music reinforcement, where the reduced bandwidth and LF headroom represent a less palatable sacrifice.
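The comb-filtering mechanism is easy to demonstrate numerically. The minimal sketch below (the driver spacing and listener geometry are illustrative assumptions) sums the wavefronts of two ideal coherent point sources at an off-axis listening position; nulls fall wherever the path-length difference is an odd number of half-wavelengths:

```python
import numpy as np

def combined_level(f, r1, r2, c=343.0):
    """dB level of two ideal coherent point sources summed at the
    listener, relative to source 1 alone. r1, r2 are source-to-listener
    distances in metres; amplitude rolls off as 1/r."""
    k = 2 * np.pi * f / c  # wavenumber
    p = np.exp(-1j * k * r1) / r1 + np.exp(-1j * k * r2) / r2
    return 20 * np.log10(np.abs(p) * r1)

# Two drivers 0.5 m apart, listener 10 m in front and 2 m off-axis:
r1 = np.hypot(10.0, 2.0 - 0.25)   # ~10.15 m
r2 = np.hypot(10.0, 2.0 + 0.25)   # ~10.25 m
freqs = np.linspace(100, 8000, 2000)
response = combined_level(freqs, r1, r2)
# Deep nulls occur wherever (r2 - r1) equals an odd number of
# half-wavelengths; for this geometry the first null lands near 1.75 kHz.
```

Sweeping the listener position instead of frequency gives the spatial picture of the same interference pattern – and widening the driver spacing pushes the first null down in frequency, which is exactly the bandwidth trade-off described above.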

Modern column speaker systems have come some way towards managing this trade-off, and there are a number of products that I have been really impressed by in recent years; delivering the directivity benefits of the column arrangement whilst performing well across enough of the frequency range to be practical for musical reinforcement in certain contexts. However, the frequency response does still remain more limited than for a genuine full range system, and the small column elements are limited in headroom. I consider it unlikely therefore that we are ever going to see the column loudspeaker performing on a level sufficient for its application in large scale sound reinforcement.

**Modern Line Arrays**

It was not until several decades later that Christian Heil, at the then fledgling L-Acoustics, revisited the concept of line array for larger scale audio reinforcement applications. Heil was interested in defining the criteria for a line array system that overcame the existing problems and presented a better solution than the cluster arrays that were commonly deployed for large events at the time. His 1992 paper comments:

> Nowadays some concerts involve more than one hundred boxes. Unfortunately arraying boxes arises interference phenomenons which are not easily mastered. As far as linear frequency response, directivity control and power level at long distance are concerned, the results can be disastrous.
>
> Large arrays of boxes, the characteristics of which are well known, lead most of the time to very surprising results. The question is to know whether or not it is possible to predict the behaviour of an array when the behaviour of each element is known.
>
> Our purpose is to describe the sound field produced by arrays in such a way that criteria for arrayability can be defined.[2]

This and later papers set out the basic criteria for L-Acoustics’ WST (Wavefront Sculpture Technology), which are the fundamental requirements at the heart of all modern line array systems.

In 1994 Heil turned WST into reality with the launch of V-DOSC, featuring the DOSC waveguide. Although met with initial scepticism, L-Acoustics’ V-DOSC rapidly cemented itself as the first in a new generation of loudspeaker systems, with many rival manufacturers subsequently releasing their own line array solutions. V-DOSC proved so monumental a turning point in the history of audio reinforcement that it went on to have a nearly 20-year production run, only recently taken out of manufacture and still used for major tours and concerts all over the world. In 2012, two decades after Heil’s original paper, V-DOSC was the main loudspeaker product deployed in the London 2012 Olympic Stadium – proof of just how lasting and significant a development in live sound reinforcement it has been.

Line array has continued to develop since the launch of V-DOSC, with virtually all the leading manufacturers launching their own ranges of line array solutions. Line array speakers have become smaller and lighter, enabling the deployment of line array in smaller venues. Solutions have become cheaper, enabling greater access to the technology such that some form of line array can now be found in a very large proportion of productions. Line array has also become very fashionable – perhaps too fashionable – leading to a *line array myth culture* in which it is not always properly understood and is sometimes heralded as something more than it is.

In my next post I intend to present a review of available literature as well as other resources, ultimately setting the scene for my own research which I will go on to share over future posts.

**Footnotes:**

[1] *Elements of Acoustical Engineering – Second Edition (1957)*, Olson, H.F.

[2] *Sound Field Radiated By Arrayed Multiple Sound Sources, 92nd Convention of the AES (1992)*, Heil, C. & Urban, M.

This is an educational project, through which I hope to help readers improve their understanding of line array behaviour. I’ll be focussing on answering two key questions: 1) how does line array *really* behave; and 2) how is that behaviour *really* achieved? I’ll be exploring this through a mixture of approaches, including reviews of existing literature, theoretical discussion, and some of my own research. In particular I’m going to be sharing a computational model I have developed, which I have found to reveal some particularly valuable insights. Some of what I have to share I genuinely believe to be a fresh, revelatory, and potentially new way of understanding line array – at least as far as the information currently in the public domain goes, I’ve not seen this approach replicated elsewhere – so for the live audio geeks I hope this will be particularly interesting.

**Why the line array project?**

Over the course of the last 20 years line array has shifted from an impractical, problem-ridden concept to a dominant approach in live audio reinforcement, particularly for larger scale venues. Some of this is fashion, but a large part of it is also down to the fact that, having ironed out the initial problems, line array confers a number of distinct benefits for audio reinforcement.

Whilst this rise in application and popularity has been meteoric, education regarding line array, its design and its deployment has not entirely kept up. Demand for line array focussed education is high as more and more engineers find themselves regularly working with and designing line array systems, yet quality education in this area is in short supply and heavily oversubscribed. I recall attending d&b’s UK-based line array training course a couple of years ago; at the time it ran bi-monthly and had a six-month waiting list. At SFL my managing director, Mark Payne, runs an excellent line array course twice a year which is consistently oversubscribed. Literature on line array is also limited (I’m intending to present a literature review in one of my following posts), and not all of what is available is reliable.

This educational void has long frustrated me. I remember being distinctly aware, early in my career, of the limitations of my own knowledge in this area and the lack of reliable resources with which to educate myself. Eventually I turned to my own investigations, which form the basis of this project and which I hope will in turn be helpful to others. In particular, I am passionate about correcting and dispelling some of the more popular line array myths out there.

Post #2 coming soon.

A number of years ago I built this website as a professional portfolio to support my freelance career. Since 2012 it has lain rather dormant as I landed a full time position with SFL.

Recently I’ve been wanting an outlet for various thoughts and projects I’ve been working on, so I’ve relaunched the website as a blog. I’m not entirely sure what this is going to turn into yet, but I’ve got a few bits and pieces lined up focussing mostly on some pro audio topics I’ve been exploring, so you can expect this to begin in that vein at least to start with.

First (proper) post coming soon!
