Your Spectrum Measurements May Be More Average Than You Know

  I’m not talking about the quality, just the variance

The basic amplitude accuracy of today’s signal analyzers is amazingly good, sometimes significantly better than ±0.2 dB. Combining this accuracy with precise frequency selectivity over a range of bandwidths—from very narrow to very wide—yields good power measurements of simple or complex signals. It’s great for all of us who seek better measurements!

However, if you’re working with time-varying or noisy signals—including almost all measurements made near noise—you’ve probably needed to do some averaging to reduce measurement variance as a way to improve amplitude accuracy.

As a matter of fact, you may already be doing two or more types of averaging at once. Here’s a summary of the four main averaging processes in spectrum/signal analyzers:

  • Video bandwidth filtering is the traditional averaging technique of swept spectrum analyzers. The signal representing the detected magnitude and driving the display’s Y-axis is lowpass filtered.
  • Trace averaging is a newer technique in which the value of each trace point (bin or bucket) is averaged each time a new sweep is made.
  • The average detector is a type of display detector that combines all the individual measurements making up each trace point into an average for that point.
  • Band power averaging combines a specified range of trace points to calculate a single value for a frequency band.

Depending on how you set up a measurement, some or all of these averaging processes may be operating together to produce the results you see.

The use of multiple averaging processes may be desirable and effective, but as I mentioned in The Average of the Log is Not the Log of the Average, different types of averages—different averaging scales—can produce different average values for the same signal.

How do you make the best choice for your measurement, and make sure the averaging scales used are consistent? The good news is that in most cases there is nothing you need to do. Agilent signal analyzers will ensure that consistent averaging scales are used, locking the scale to one of three: power, voltage or log-power (dB).
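To see how much the scale choice can matter, here’s a minimal NumPy sketch of my own (simulated envelope-detected noise, not analyzer data) that averages the same noisy samples on the three scales:

```python
# Average the same simulated noise on the power, voltage and log-power scales.
# Values and the Rayleigh/exponential noise model are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
v = rng.rayleigh(scale=1.0, size=100_000)        # detected voltage samples
p = v ** 2                                        # corresponding power samples

power_avg_db   = 10 * np.log10(np.mean(p))        # power (rms) scale
voltage_avg_db = 20 * np.log10(np.mean(v))        # voltage scale
log_avg_db     = np.mean(10 * np.log10(p))        # log-power (dB) scale

print(power_avg_db, voltage_avg_db, log_avg_db)
```

For noise like this, the voltage average reads about 1 dB below the power average and the log average about 2.5 dB below it, which is exactly the kind of inconsistency the analyzer avoids by locking every averaging process to a single scale.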

In addition, Agilent analyzers choose the appropriate scale depending on the type of measurement you’re making. Selecting marker types and measurement applications—such as adjacent channel power, spurious or phase noise—gives the analyzer all the information it needs to make an accurate choice.

If you’re making a more general measurement in which the analyzer does not know the characteristics of the signal, there are a couple of choices you can make to ensure accurate results and optimize speed.

When you want to quickly reduce variance and get accurate results—regardless of signal characteristics—use the average detector.

The function of the average detector is enlarged and shown over an interval of slightly more than one display point. The average detector collects many measurements of IF magnitude to calculate one value that will be displayed at each bucket boundary.

Beyond the accuracy it provides for all signal types, the average detector is extremely efficient at quickly reducing variance and is very easy to optimize: If you want more averaging, just select a slower sweep speed. The analyzer will have more time to make individual measurements for each display point and will automatically add them to the average. Simply keep on reducing sweep speed until you get the amount of averaging you want.
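For a feel of how that works, here’s a hedged sketch of the average-detector idea (my own simplification with arbitrary bucket counts, not an analyzer implementation): each display point is the power average of every sample that falls in its bucket, so a slower sweep means more samples per point and less displayed variance.

```python
# Simulate an average detector on noise: more samples per display point
# (a slower sweep) keeps shrinking the variance of the displayed trace.
import numpy as np

rng = np.random.default_rng(1)

def average_detector(power_samples, n_points):
    """Power-average the samples that fall into each display point (bucket)."""
    buckets = np.array_split(power_samples, n_points)
    return np.array([10 * np.log10(b.mean()) for b in buckets])

n_points = 1001
for samples_per_point in (10, 100, 1000):         # slower sweep -> more samples
    p = rng.exponential(scale=1.0, size=n_points * samples_per_point)
    trace_db = average_detector(p, n_points)
    print(samples_per_point, trace_db.std())      # trace variance keeps falling
```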

The exception to this approach is when you’re measuring small CW spurs near noise, and in that case you may want to use a narrower video bandwidth filter for averaging.

With these two approaches you’ll improve the quality of your signal measurements and reduce their variance, with a minimum of effort and no accidental scale inconsistencies. Once again, a combination of techniques provides the desired results. For more detail, see Chapter 2 of the updated Application Note 150 Spectrum Analysis Basics.


YIG Spheres: The Gems in your Signal Analyzer

  Like an Italian sports car, they combine impressive performance and design challenges

In a 1947 speech, Winston Churchill remarked “…it has been said that democracy is the worst form of government except all those other forms…” Today, I suspect that some microwave engineers feel the same way about YIG spheres as microwave resonator elements. They’re an incredibly useful building block for high-frequency oscillators and filters, but it takes creativity and careful design to tame their sensitive and challenging nature.

The “G” in “YIG” stands for garnet, a material better known in gemstone form. A YIG or yttrium-iron-garnet resonator element is a pinhead-sized single-crystal sphere of iron, oxygen and yttrium. These spheres resonate over a wide range of microwave frequencies, with very high Q, and the resonant frequency is tunable by a magnetic field.

That makes them perfect as tunable elements for microwave oscillators and filters, and in this post I’ll focus on their role in the YIG-tuned filters (YTFs) used as preselectors in microwave and millimeter signal analyzers.

These analyzers typically use an internal version of the external harmonic mixing technique described in the previous post. It’s an efficient way to cover a very wide range of input frequencies using different harmonics of a microwave local oscillator—itself often YIG-tuned!

However, mixers produce a number of different outputs from the same input frequencies, including high-side and low-side products plus many others, typically smaller in magnitude. These undesired mixer products will cause erroneous responses or false signals in the spectrum analyzer display, making wide-span signal analysis very confusing.
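To make the confusion concrete, here’s a tiny sketch of the arithmetic (the LO and IF values are illustrative, not a specific analyzer’s conversion plan): for each LO harmonic N, inputs at N·fLO − fIF and N·fLO + fIF both land at the same IF, and that’s before counting the smaller higher-order products.

```python
# Enumerate the low-side and high-side responses for a few LO harmonics.
# f_lo and f_if are illustrative values, not an actual LO plan.
f_lo = 5.0      # GHz, local oscillator
f_if = 0.3225   # GHz, analyzer IF

for n in range(1, 6):                  # LO harmonics used for downconversion
    print(f"N={n}: inputs at {n * f_lo - f_if:.4f} and "
          f"{n * f_lo + f_if:.4f} GHz both reach the IF")
```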

One straightforward solution to this problem is a bandpass filter in the signal analyzer that tracks the input frequency. Here’s an example:

The yellow trace is the frequency response of a YIG preselector bandpass filter as it appears at the signal analyzer IF section. The blue trace shows the raw frequency response, with the preselector bypassed.

YIG technology enables the construction of a tunable preselector filter, wider than the widest analyzer RBW, whose center frequency can be synchronously swept with the analyzer’s center frequency. This bandpass filter rejects any other signals that would cause undesirable responses in the analyzer display.

Problem solved! So why the Churchillian perspective on YIGs? It’s a matter of the costs that come with the compelling YIG benefits:

  • Sensitivity is reduced: The preselector’s insertion loss has a direct impact on analyzer sensitivity.
  • Stability and tuning are challenging: The preselector’s wide, magnetic tuning range comes with temperature sensitivity and a degree of hysteresis. It is a challenge to consistently tune it precisely to the desired frequency, requiring careful characterization and compensation.
  • Bandwidth is limited: The preselector passband is wider than the analyzer’s widest RBW filter, but narrower than some wideband signals that would normally be measured using a digitized IF and fixed LO.

Fortunately signal analyzer designers have implemented a number of techniques to optimize preselector performance and mitigate problems, as described in Agilent Application Note 1586 Preselector Tuning for Amplitude Accuracy in Microwave Spectrum Analysis.

An alternative approach is simply to bypass the preselector for wideband measurements and whenever conditions allow. Many measured spans are not wide enough to show the undesirable mixing products, or the unwanted signal responses can be noted and ignored.

So, just as with democracy and its alternatives, YIG preselectors offer compelling benefits that far outweigh their disadvantages.

If you’d like to know more about harmonic mixing and preselection, see Chapter 7 of the new version of Application Note 150 Spectrum Analysis Basics.


External Mixing and Signal Analysis: You may be Doing it Already

  Where should your first mixer be when you’re making high-frequency measurements?

In Torque for Microwave & Millimeter Connections, I complained that engineering was inherently more challenging at microwave and millimeter frequencies. One reason: many factors that can be ignored at lower frequencies really begin to matter. Therefore, it’s important to consider all the tools and approaches that can help you optimize measurements at these frequencies, and this includes external mixing.

In my years of working at lower frequencies I knew about external mixing, but I always thought of it as a rather exotic and probably difficult technique. In reality, it’s a straightforward approach that has significant benefits, and modern hardware is making it both better and easier.

I also realized that I had been using external mixing for years, but at home: the low noise block (LNB) downconverter in my satellite dish. Satellite receivers use external mixing for many of the same reasons engineers do.

For satellite receivers and signal analyzers it’s a matter of where you place the first mixer. In analyzing microwave and millimeter signals, the first signal-processing element—other than a preamplifier or attenuator—is generally a mixer that downconverts the signal to a much lower frequency.

There’s no requirement that this mixer be inside the analyzer itself. In some cases there are benefits to moving the mixer outside the analyzer and closer to the signal under test, as shown below.

In external mixing, the analyzer supplies an LO signal output and its harmonics are used by the mixer to downconvert high frequencies from a waveguide input. The result is sent to the analyzer as an IF signal that’s processed by the analyzer’s normal IF section.

External mixing has a number of benefits:

  • Flexible, low-loss connection between signal and analyzer. The vital first downconverting element can be placed at the closest and best location to analyze the signal, typically with a waveguide connection. The analyzer can be located for convenience without a loss penalty from sending high frequencies over a distance.
  • Frequency coverage. External mixers are available for frequencies from 10 GHz to the terahertz range, in passive and active configurations.
  • Cost. Signal analysis may be needed over only a limited set of frequencies in the microwave or millimeter range, and a banded external mixer can extend the coverage of an RF signal analyzer to these frequencies.
  • Performance. Measurement sensitivity and phase noise performance can be excellent due to reduced connection loss and the use of high-frequency and high-stability LO outputs from the signal analyzer.

Some recent innovations have made external mixers easier to use and provide improved performance. These “smart” mixers add a USB connection to the signal analyzer to enable automatic configuration and power calibration. The only other connection needed is a combined LO output/IF input connection, as shown below.

Agilent’s M1970 waveguide harmonic mixers are self-configuring and calibrating, requiring only USB and SMA connections to PXA and MXA signal analyzers.

The new mixers enhance ease of use, including automatic download of conversion loss for amplitude correction. Nonetheless, they can’t match the convenience and wide frequency coverage of a one-box internal solution that has direct microwave and millimeter coverage. And because external mixing doesn’t include a preselector filter, some sort of signal-identification function will be necessary to highlight and remove signals generated by a mode—LO harmonic or mixing—other than the one for which the display is calibrated (more on this in a future post).
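As a rough illustration of that amplitude-correction step (the conversion-loss table and frequencies below are invented for the example, not M1970 data), the downloaded conversion loss is simply added back to the level measured at the analyzer IF:

```python
# Apply a hypothetical mixer conversion-loss table to measured IF levels.
import numpy as np

cl_freq_ghz = np.array([50.0, 60.0, 70.0, 80.0])   # made-up loss table (GHz)
cl_db       = np.array([20.0, 22.0, 24.5, 27.0])   # made-up loss values (dB)

meas_freq_ghz   = np.array([62.0, 66.0, 71.0])     # measurement frequencies
measured_if_dbm = np.array([-52.0, -47.5, -60.0])  # levels seen at the IF

# Corrected input level = IF level + conversion loss at that frequency
corrected_dbm = measured_if_dbm + np.interp(meas_freq_ghz, cl_freq_ghz, cl_db)
print(corrected_dbm)
```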

External mixing is now a supported option in Agilent’s PXA and MXA signal analyzers. This is described in the new version of Application Note 150 Spectrum Analysis Basics and in the application note Microwave and Millimeter Signal Measurements: Tools and Best Practices.


Comparing Coax and Waveguide

  Making the choice for microwave and millimeter connections

I haven’t used waveguide very much, but it’s been an interesting technology to me for many years. I always enjoyed mechanical engineering and I’ve done my share of plumbing—everything from water to oil to milk—so waveguide engages my curiosity in multiple domains.

Coaxial cables and connectors are now readily available at frequencies to 110 GHz and at first glance they seem so much easier and simpler than waveguide. I wondered why waveguide is still in use at these frequencies, so a couple of years ago, while writing an application note, I spoke to electrical and mechanical engineers to understand the choices and tradeoffs.

It’s perhaps no surprise that there are both electrical and mechanical factors involved in the connection decision. At microwave frequencies, and especially in the millimeter range and above, electrical and mechanical characteristics do an intricate dance. Understanding how they intertwine is essential to making better measurements.

Coaxial connections: flexible and convenient. Direct wiring, in its coaxial incarnation, is the obvious choice wherever it can do the job acceptably well. The advances of the past several decades in connectors, cables and manufacturing techniques have provided a wide range of choices at reasonable cost. Coax is available at different price/performance points from metrology-grade to production-quality, and flexibility varies from extreme to semi-rigid. While the cost is significant, especially for precision coaxial hardware, it is generally less expensive than waveguide.

Coax can be an elegant and efficient solution when device connections require some kind of power or bias, such as probing and component test. A single cable can perform multiple functions, and the technique of frequency multiplexing can allow coax to carry multiple signals, including signals moving in different directions. For example, Agilent’s M1970 waveguide harmonic mixers use a single coaxial connection to carry an LO signal from a signal analyzer to an external mixer and to carry the IF output of the mixer back to the analyzer.

All is not lost for waveguide. Indeed, loss is an important reason waveguide may be chosen over coax.

Waveguide: power and performance. Power considerations, both low and high, are often the reasons engineers trade away the flexibility and convenience of coax. In most cases, the loss in waveguide at microwave and millimeter frequencies is significantly less than that for coax, and the difference increases at higher frequencies.

For signal analysis, this lower loss translates to increased sensitivity and potentially better accuracy. Because analyzer sensitivity generally declines with increasing frequency and increasing band or harmonic numbers, the lower loss of waveguide can make a critical difference in some measurements. Also, because available power is increasingly precious and expensive at higher frequencies, the typical cost increment of waveguide may be lessened.

On the subject of power, the lower loss in waveguide comes with high power-handling capability. As occurs with small signals, the benefit increases with increasing frequency.

As you can see from the summary below, other coax/waveguide tradeoffs may factor in your decision.

Comparing the benefits of coaxial and waveguide connections for microwave and millimeter frequency applications.

Mainstream technologies are extending to significantly higher frequencies and I have already wondered if you can push SMA cables and connectors to millimeter frequencies. In some cases, however, the question may be whether cables of any kind are the best solution, and whether it’s time to switch from wiring to plumbing.

Several application notes are available with information on measurements at high frequencies, including Microwave and Millimeter Signal Measurements: Tools and Best Practices.


Signal Analysis: What Would You Do With an Extra 9 dB?

  Mapping the benefits of noise subtraction to your own priorities

Otto von Bismarck said that “politics is the art of the possible” and he might as well have been speaking about RF engineering, where the art is to get the most possible from our circuits and our measurements.

The previous post on noise subtraction described a couple of ways that RF measurements could be improved by subtracting most of the noise power in a measuring instrument such as a spectrum or signal analyzer. In some instruments this process is now automated and it’s worth exploring the benefits and tradeoffs as a way to understand the limits of what’s possible.

In the last post I briefly mentioned sensitivity and potential speed improvements and in this post I’d like to discuss one example of what a potent technique noise subtraction can be. One diagram can summarize the benefits and tradeoffs for this example, but it’s an unusual format and a little bit complex so it deserves some explanation.

Accuracy vs. SNR for noise-like signals and a 95% coverage interval. The blue curves show the error bounds for measurements with noise subtraction and the red curves show the bounds without noise subtraction. Using subtraction provides a 9.1 dB improvement in the required SNR for a measurement with 1 dB error.

I didn’t produce this diagram and confess that I didn’t understand it very well at first glance. The 9.1 dB figure annotating the difference between two curves sounds impressive, but just what does it mean for real measurements?

Let me explain: This is a plot of accuracy (y-axis) vs. signal/noise ratio (SNR, x-axis) for a 95% error coverage interval and for noise-like signals. Many digitally modulated signals are good examples of noise-like signals.

The red curves and the yellow fill indicate the error bounds for measurements made without noise subtraction. Achieving 95% confidence that the measurement error will be less than 1 dB requires an SNR of 7.5 dB or better; keeping the error below 2 dB requires an SNR of 3.5 dB, and so on. Note that the mean error is always positive and increases rapidly as SNR is degraded.

Now look at the blue curves and green fill to see the benefit of noise subtraction. In this example the effectiveness of the noise subtraction is sufficient to reduce noise level by about 8 dB, a conservative estimate of the performance of this technique, whether manual or automatic.

First, you can see that the mean error is now zero, removing a bias from the measurement error. Second, the required SNR for 1 dB error has been reduced to -1.6 dB, a 9.1 dB improvement from the measurement made without noise subtraction.
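If you’d like to connect those curves to some arithmetic, here’s a small sketch of just the bias part of the story (the 95% coverage intervals in the figure also account for variance, which this ignores): added noise makes the measured power read high by 10·log10(1 + N/S), and subtracting a good noise estimate removes that bias.

```python
# Bias in a power measurement when uncorrelated noise adds to the signal.
import numpy as np

snr_db  = np.array([10.0, 7.5, 3.5, 0.0, -1.6])
bias_db = 10 * np.log10(1 + 10 ** (-snr_db / 10))   # reads high by this much
for s, b in zip(snr_db, bias_db):
    print(f"SNR {s:5.1f} dB -> +{b:.2f} dB bias before subtraction")
```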

I have complained in the past about the effects of noise on RF measurements and it’s a frustration that many share. However, this example demonstrates the other side of the situation: Subtracting analyzer noise power, either manually or automatically, with technologies such as noise floor extension (NFE) provides big performance benefits.

What would you do with an extra 9 dB? You might use it to improve accuracy. You could trade some of it away for faster test time, improved manufacturing yields, a little increased attenuation to improve SWR, or perhaps eliminate the cost of a preamplifier. Use it well and pursue your own version of “the art of the possible.”


Adding Measurement Sensitivity by Subtracting Noise Power

  If noise power is part of your problem, less is more

An earlier post on measuring signals near noise described how noise power in a signal measurement adds to the measured signal power and thus creates an error component. The error can be significant, even for signals well above the noise, when judged against the excellent inherent accuracy of modern signal analyzers.

Fortunately, this additive error process can be reversed in many measurements, providing an improvement in both measurement accuracy and effective sensitivity. This performance improvement is especially important when small signals must be measured along with larger ones; that is, when sensitivity can’t be improved by reducing attenuation or adding preamplification.

The key to these improvements is knowledge of the amount of added noise power, and in most cases this corresponds to the noise floor of the signal analyzer. To correct the typical power spectrum measurement, the average noise floor power in the analyzer’s RBW filter is subtracted from each point of a power spectrum measurement. An example of that process is shown in the figure below.

Two spectrum measurements of a low power seven-tone signal. Amplitude decreases 3 dB for each tone and the scale is 3 dB/div. The blue trace shows the benefit of subtracting most of the signal analyzer noise using Agilent’s noise floor extension (NFE) technique.

Subtracting the analyzer’s noise power contribution is simple trace math on a power (not dB) scale, but precisely determining that power is not so simple.
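Here’s a minimal sketch of that trace math (illustrative values, with a simplistic clamp for points that land at or below the noise estimate; a real implementation handles those points more carefully):

```python
# Subtract an analyzer noise-floor estimate from a trace on a linear power scale.
import numpy as np

def subtract_noise(trace_dbm, noise_floor_dbm, floor_db_below=15.0):
    """Subtract the noise estimate point by point and return a dBm trace."""
    sig_mw   = 10 ** (trace_dbm / 10)         # measured power, linear (mW)
    noise_mw = 10 ** (noise_floor_dbm / 10)   # noise-floor estimate, linear (mW)
    # Clamp the residual so points at or below the estimate don't go to -infinity
    residual = np.maximum(sig_mw - noise_mw,
                          noise_mw * 10 ** (-floor_db_below / 10))
    return 10 * np.log10(residual)

trace = np.array([-80.0, -95.0, -99.0, -100.0])   # illustrative trace points (dBm)
noise = np.full_like(trace, -100.0)               # illustrative noise floor (dBm)
print(subtract_noise(trace, noise))
```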

The direct approach is to disconnect the signal under test, perform a highly averaged noise floor measurement, reconnect the signal and measure it with the noise subtracted. This approach is accurate and effective but can be very slow. In addition, the noise floor measurement must be re-done if the measurement configuration is changed in any way or if measurement conditions change, especially temperature.

A more sophisticated technique involves accurately modeling the signal analyzer noise floor under all measurement configurations and operating conditions, and then using that information to correct signal measurements on the fly. This technique is not quite as effective as individual noise floor measurements, but it is much faster and more convenient. In addition, it requires no user interaction, imposes no speed penalty, and in the Agilent PXA signal analyzer it can provide up to 12 dB of improved sensitivity, as shown above.

This noise floor extension (NFE) technique has been available as a standard feature in the PXA signal analyzer for several years, and is now available as an option for the MXA signal analyzer. In the MXA this option is a license key upgrade, available for all existing MXA models and implemented through an automated self-calibration that takes 30 minutes or less.

Over the full frequency range of the MXA, the NFE option produces improvements such as those shown here.

The noise floor of an MXA signal analyzer is shown from 10 MHz to 26 GHz, both with and without the benefit of NFE noise power subtraction. Effective sensitivity is improved over a wide frequency range by approximately 9 dB with no reduction in measurement speed and no need for separate noise floor characterization measurements.

I suppose this is another example of adding information to improve measurement performance. In this case the information is the analyzer noise power, and the resulting improvement is in both sensitivity and accuracy for small signals.

The discussion so far has focused mainly on sensitivity and its consequences. It’s worth noting that this sensitivity enhancement can be traded away for other benefits such as measurement speed. For example, NFE may allow a wider RBW to be used for a given measurement, resulting in significantly faster sweep rates.
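As a rough rule of thumb (approximate scaling, not a guaranteed spec): displayed noise rises by about 10·log10 of the RBW ratio while swept measurement time falls roughly as the square of that ratio, so sensitivity recovered by NFE can be spent on a wider RBW and a much faster sweep.

```python
# Back-of-the-envelope trade of NFE sensitivity for sweep speed.
nfe_gain_db  = 9.0                        # sensitivity recovered by NFE (see above)
rbw_factor   = 10 ** (nfe_gain_db / 10)   # noise floor rises ~10*log10(RBW ratio)
speed_factor = rbw_factor ** 2            # swept time scales roughly as span/RBW^2
print(f"RBW can widen ~{rbw_factor:.1f}x; sweep roughly {speed_factor:.0f}x faster")
```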

For more detail on the applications and measurement improvements related to NFE, see the application note Using Noise Floor Extension in the PXA Signal Analyzer or see the NFE option page for the MXA.


Teach Yourself Electronics!

  Self-instruction from the pre-Internet era stands the test of time

I already spend plenty of time on the Web, so a recent urge to refresh my knowledge of vacuum tubes took me back to the place where I first learned how they worked: a “TutorText” book.

Yes, a real book and not a Web page or a YouTube video, helpful though each may be.

Many years ago—maybe in middle school—a TutorText guided me through the basics of electronics, including an introduction to vacuum tubes. I had fond memories of the book and its unique approach and, not being in a hurry, I found a copy online at a used book store in Michigan. A week later I held in my hands the same title that had been on a shelf in our library so long ago. For once, something from my distant past was exactly as I remembered it!

The book was Introduction to Electronics by Hughes & Pipe, published in 1961.

The TutorText concept is simple, though implementing it so effectively must have been a lot of work. The book opens with basic subject information for the reader, followed by a multiple-choice question on that information. Each possible answer directs you to a different page. Correct answers lead to additional explanations and instruction, followed by the next multiple-choice question. Incorrect answers lead to pages where likely errors are explained and the reader may be—not so gently!—admonished to pay attention and try again.

Hughes and Pipe were not just fooling around. The text reveals an instructional attitude that’s a little more direct than what is in vogue today. For example, if you ignore the instructions and turn from page 1 to page 2 you are met with:

“You did not follow instructions. . .”

“Now there was no way you could have arrived at this page if you had followed instructions.”

I found the approach to be refreshing, and the wrong-answer pages to be the most interesting. Here’s one:

TutorText p379

And here’s another:

TutorText p355

The authors clearly worked hard to make the style personal and motivational—to the extent I almost fear a rap on my knuckles when I turn to the wrong page due to sloppy reasoning or inattention.

For me, the effect of digging back into this book is a joyful recharging of my energy for learning—and maybe that’s another useful lesson from the book. Sometimes learning can be enhanced by changing the method or the vehicle. If you’ve been mired in online articles and video clips, consider getting up out of your chair and bugging an expert. Or go find a book. Even an old one from 1961.

Introduction to Electronics is one of a number of TutorTexts. Others will teach you about the arithmetic of computers, introductory algebra, basic computer programming (c. 1962), how to use a slide rule, advanced bidding, and the meaning of modern poetry. They’re sometimes grouped under the heading of “gamebooks” and are available at used book stores, both online and brick-and-mortar.

If you’re interested in a more current—and challenging—topic, you could do worse than a book I had a small role in writing: LTE and the Evolution to 4G Wireless. None of its passages will leave you fearing a rap on the knuckles.


Time Is Still On Your Side: An Overlooked Averaging Type, Part 2

  Improve SNR 10 to 20 dB? Absolutely!

Last time, I introduced the technique of time averaging, also known as vector or coherent averaging. When available in a signal analyzer and used on suitable signals, the SNR improvements are impressive, as shown below.

Compare the results of trace averaging (top left) and time averaging (top right) on the same signal with the same amplitude scale of 10 dB/div: Trace averaging reduces the variance of the results, while time averaging substantially reduces noise in the measurement. The pulsed signal’s RF envelope is shown in the lower trace, along with time-gate markers and the level of the magnitude trigger (dashed white line).

In this example a pulsed signal repeats with consistent phase and the IF magnitude trigger of the 89600 VSA software is used to align successive acquisitions of the signal for 100 time averages. The average noise level is reduced by about 20 dB, while the measured signal level is unaffected. Note, however, that the variance is higher in the time-averaged result.
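A quick sanity check on that number, assuming the noise is uncorrelated from one acquisition to the next: coherent averaging of N aligned records reduces the average noise power by 10·log10(N), so 100 averages is right at 20 dB.

```python
# Expected noise reduction from N coherent (vector) averages.
import math

for n_avg in (10, 100, 1000):
    print(f"{n_avg:5d} averages -> about {10 * math.log10(n_avg):.0f} dB lower noise")
```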

The bottom trace in the figure shows the averaged time record from which the results are calculated. In this example, time gating is used to isolate the spectrum measurement on the pulse only.

Most measurement types can be calculated from the averaged time record, including spectrum, phase, delay and analog demodulation. All these measurements will benefit from the lower effective noise or better SNR of the time averaging.

As described last time, this technique has two significant requirements: a signal that repeats in a coherent fashion and some way to trigger the averaging process. Fortunately, repeating signals are common in wireless and aerospace/defense applications, especially those that use signals generated from arbitrary waveform generators or similar processes.

In addition, it is not necessary for the entire signal to repeat. Time averaging can be combined with gated or other time-selective measurements focused on the repeating portion of a signal that otherwise changes in some way from cycle to cycle. For example, the preambles of many digitally modulated signals repeat consistently even when payload data varies from frame to frame.

As for triggering requirements, Agilent signal analyzers and VSA software offer several solutions. The IF magnitude trigger, mentioned previously, is often a good choice and can be used with signal captures or recordings. External triggers are also available in many applications, especially when signal generators are used to generate framed signals or when the trigger associated with the repeat of an arbitrary waveform is available.

Another trigger source is the periodic timer or frame trigger function available in some Agilent signal analyzers. This function offers a high degree of precision and flexibility in generating periodic triggers from the analyzer itself rather than the input signal. More on this in a future post.

Lastly, I should mention that time averaging works on CW signals as well as those that are time-varying. Of course, with CW signals one can also reduce RBW to improve sensitivity. With time-varying signals you sometimes can’t do that because narrow RBWs can filter out some of the signal you’re trying to measure.


Time Is On Your Side: An Overlooked Averaging Type, Part 1

  Get a quick SNR improvement of 10 to 20 dB—when the conditions are right

The topic of averaging comes up a lot in RF measurements and in this blog. I suppose it’s an unavoidable consequence of the fact that the universe is noisy and we engineers are striving instead for a quiet certainty.

Most discussions of averaging focus on smoothing data to reduce the effect of noise and therefore the variance of measurement results. As described previously, it’s important to reduce variance efficiently and without distorting the results.

However, as explained last time, the right type of averaging can modestly improve the signal/noise ratio for certain measurements instead of simply smoothing them. This is a very fortunate situation, albeit limited to CW signals near noise.

Another fortunate situation, and one with a bigger benefit, revolves around signals that repeat consistently. Such signals are common in communications, navigation and imaging, all applications in which improved dynamic range—not just reduced variance—is valuable.

These repeating signals represent additional information, giving us the chance to go beyond the well-known good/fast/cheap tradeoffs.

Information is the third element in this useful triad of measurement tradeoffs. The diagram symbolizes how measurements may be improved by treating information as an input to improve the measurement process and not merely an output.

Just as it’s easy—for me, at least—to underestimate the magnitude of noise, it’s also easy to underestimate the additional information represented by repeating signals. So how do we harness this information to make better measurements?

The answer is time averaging, also referred to as synchronous or vector averaging. As the names imply, this type of averaging operates in the time domain and accounts for magnitude and phase or I and Q, as shown in the figure below.

The time averaging process is shown graphically for repeated samples of a single point in a repeating signal. Vector averaging of the samples quickly converges on a better measurement (red dot) of the signal’s actual value.

The concept is straightforward: A repeating signal is sampled on a time scale precisely synchronized with its repetition interval, and the analyzer’s RF downconversion is phase-stable with respect to the signal. Samples from each repetition of the signal are averaged in I/Q or magnitude/phase to form a time record for any type of analysis, including time, frequency and demodulation. Noise is uncorrelated with the signal and is averaged out, as shown above.
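Here’s a minimal NumPy sketch of the process using a made-up repeating tone in complex noise; perfect time and phase alignment of each acquisition is assumed, which in a real measurement is the trigger’s job.

```python
# Vector (time) averaging of aligned I/Q records: the repeating signal is
# preserved while uncorrelated noise averages toward zero.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_records = 1024, 100
t = np.arange(n_samples)
signal = 0.1 * np.exp(2j * np.pi * 0.05 * t)       # the repeating I/Q signal

noise   = (rng.standard_normal((n_records, n_samples)) +
           1j * rng.standard_normal((n_records, n_samples))) / np.sqrt(2)
records = signal + noise                            # 100 aligned acquisitions

avg_record = records.mean(axis=0)                   # time (vector) average

def residual_noise_db(x):
    return 10 * np.log10(np.mean(np.abs(x - signal) ** 2))

print("one record:  ", residual_noise_db(records[0]), "dB")
print("100 averages:", residual_noise_db(avg_record), "dB")   # ~20 dB lower
```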

The fundamental thing to understand about time averaging is that the noise is not smoothed and the variance may not be reduced. Instead, most of the noise is effectively removed. That’s the magic of adding information to the measurement!

When seen in operation, it does look a little like magic. In many cases, hundreds of averages can be performed in a second or two, and the measurement noise floor plunges by 10 to 20 dB!

The other part of the magic is that the improvement in dynamic range applies to all kinds of measurements in the time, frequency and modulation domains.

This averaging type is available standalone on Agilent’s X-Series signal analyzers and the PSA spectrum analyzer, and on other measurement platforms through the 89600 VSA software.

In the next post I’ll illustrate the benefits of time averaging using an example or two, and discuss some practical implementations and limitations. This averaging won’t suit every situation, but it’s a powerful way to make the best use of in-hand information and produce better measurements.


The Elegance of the Classic Spurious Measurement

  Simplicity and synergy—sometimes physics breaks your way

Elegance, like beauty, may be in the eye of the beholder. And I doubt that non-engineers would find the classic spurious measurement setup to be elegant. Nonetheless, I think the traditional approach to measuring CW spurs near noise does qualify as elegant and it’s impressively simple and effective.

One of the main tasks of spectrum analysis has always been to find and measure spurious signals. It can be a difficult job when spurs are close to the noise floor, a situation that causes problems with both measurement accuracy and speed. I summarized the problem graphically in Oh, the noise! Noise! Noise! Noise and described how measurements of both the signal and the signal/noise ratio (SNR) would be affected when SNR was small.

Fortunately, the laws of physics sometimes break our way, and this is one of those times. The practical averaging technique available in early spectrum analyzers had two significant benefits: It both accurately represented the CW spurs that engineers were searching for and—wonders!—it reduced the apparent magnitude of the noise that was spoiling the measurement.

How can an averaging technique improve accuracy and effective SNR? The traditional averaging technique for spectrum analyzers was a narrow video bandwidth (VBW) filter, smoothing the video signal that represented the magnitude of the detected signal. Because the video signal was usually log-scaled, the VBW filtering performed an average of the log of the signal magnitude. In The Average of the Log is Not the Log of the Average I described the two approaches and noted that VBW filtering was accurate for CW signals but not for other signals such as noise. In Log Scaling: Useful But Sometimes Tricky I explained the downward bias that comes from video averaging noise and other time-varying signals.

It all comes together in the classic—and elegant—spurious measurement setup where the averaging of a VBW no wider than one-third of the RBW accurately represents the magnitude of CW spurs and reduces apparent noise power by about 2.5 dB, as shown in the figure below.

The results of power averaging (center) and log averaging (right) on a signal with 1 dB SNR are compared to a no-noise measurement (left). Log averaging substantially reduces the effect of the noise and dramatically improves the accuracy of the measurement.

For the 1 dB SNR case shown in the figure, the decrease in measurement error is dramatic, falling from 2.54 dB to 0.63 dB!
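If you’d like to check the roughly 2.5 dB figure numerically, here’s a quick simulation of envelope-detected noise (illustrative, not analyzer data):

```python
# Compare a power average with a log (dB) average of the same noise.
import numpy as np

rng = np.random.default_rng(3)
noise_power = rng.exponential(scale=1.0, size=1_000_000)   # noise power samples

power_avg_db = 10 * np.log10(noise_power.mean())    # average-detector style result
log_avg_db   = (10 * np.log10(noise_power)).mean()  # narrow-VBW, log-scale style result
print(f"log average reads {power_avg_db - log_avg_db:.2f} dB below the power average")
```

The CW spur itself, being constant, is unaffected by either average, so the log average effectively pushes only the noise down.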

It’s a fortunate coincidence for the common and demanding measurement situation in which small signals must be measured near noise: The simple, easily-implemented averaging technique is also the one that better separates the signal from noise and improves measurement accuracy.

Of course, other factors and tradeoffs are always involved. The VBW averaging technique is not accurate for non-CW signals. Also, narrow VBW filters may reduce measurement speed significantly because sweep rates are related to the narrower of VBW and RBW settings.

For a more thorough discussion of this topic, including a quantitative analysis of the errors, see page 24 of Application Note 1303, Agilent Spectrum and Signal Analyzer Measurements and Noise.
