
Spectral density. Basic digital communication terminology

In statistical radio engineering and physics, the spectral representation of deterministic signals and random processes in the form of a spectral density, based on the Fourier transform, is widely used when studying such signals and processes.

If the process has finite energy and is quadratically integrable (such a process is necessarily non-stationary), then for a single realization of the process the Fourier transform can be defined as a random complex function of frequency:

X(f) = ∫_{−∞}^{∞} x(t) e^{−i2πft} dt. (1)

However, this transform turns out to be almost useless for describing the ensemble. The way out of this situation is to discard part of the spectral information, namely the phase spectrum, and to construct a function that characterizes the distribution of the energy of the process along the frequency axis. Then, by Parseval's theorem, the energy is

E_x = ∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |X(f)|² df. (2)

The function S_x(f) = |X(f)|² thus characterizes the distribution of the realization's energy along the frequency axis and is called the spectral density of the realization. Averaging this function over all realizations yields the spectral density of the process.
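As an illustrative numerical sketch (the Gaussian pulse and all names here are my own assumptions, not from the source), one can compute X(f) for a single finite-energy realization and verify Parseval's relation (2):

```python
import numpy as np

dt = 0.001                           # sampling step, s
t = np.arange(-5, 5, dt)
x = np.exp(-t**2)                    # one finite-energy realization x(t)

X = np.fft.fft(x) * dt               # approximates X(f) = ∫ x(t) e^{-i2πft} dt
f = np.fft.fftfreq(len(t), dt)
S = np.abs(X)**2                     # energy spectral density of the realization

E_time = np.sum(np.abs(x)**2) * dt   # ∫ |x(t)|² dt
E_freq = np.sum(S) * (f[1] - f[0])   # ∫ |X(f)|² df
print(E_time, E_freq)                # the two energies agree, as (2) requires
```

Scaling the FFT by dt makes it a rectangle-rule approximation of integral (1), so the time-domain and frequency-domain energies match to round-off.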

Let us now turn to a wide-sense stationary, centered random process x(t), whose realizations have infinite energy with probability 1 and therefore have no Fourier transform. The power spectral density of such a process can be found from the Wiener–Khinchin theorem as the Fourier transform of the correlation function:

S_x(f) = ∫_{−∞}^{∞} k_x(τ) e^{−i2πfτ} dτ. (3)

Since the direct transform exists, so does the inverse Fourier transform, which recovers k_x(τ) from the known S_x(f):

k_x(τ) = ∫_{−∞}^{∞} S_x(f) e^{i2πfτ} df. (4)

Setting f = 0 in formula (3) and τ = 0 in formula (4), we obtain

S_x(0) = ∫_{−∞}^{∞} k_x(τ) dτ, (5)
σ_x² = k_x(0) = ∫_{−∞}^{∞} S_x(f) df. (6)

Formula (6), together with (2), shows that the variance determines the total energy of a stationary random process, equal to the area under the spectral density curve. The quantity S_x(f) df can be interpreted as the fraction of energy concentrated in the small frequency range from f − df/2 to f + df/2. If x(t) is understood as a random (fluctuation) current or voltage, then S_x(f) has the dimension of energy, [V²/Hz] = [V²·s], so S_x(f) is sometimes called the energy spectrum. Another interpretation is often found in the literature: σ_x² is treated as the average power dissipated by the current or voltage in a resistance of 1 Ohm. The quantity S_x(f) is then called the power spectrum of the random process.
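A hedged numerical check of (3) and (6), assuming the standard exponential correlation model k_x(τ) = σ² e^{−α|τ|} (an example of mine, not given in the source), for which (3) yields S_x(f) = 2ασ² / (α² + (2πf)²); integrating S_x over all frequencies should recover the variance σ², as (6) states:

```python
import numpy as np

sigma2, alpha = 2.0, 3.0
f = np.linspace(-200.0, 200.0, 800_001)                 # frequency grid, Hz
S = 2 * alpha * sigma2 / (alpha**2 + (2 * np.pi * f)**2)  # FT of sigma2*exp(-alpha*|tau|)
var = np.sum(S) * (f[1] - f[0])                         # rectangle-rule integral of S over f
print(var)                                              # ≈ sigma2, confirming (6)
```

The small discrepancy comes from truncating the infinite integration range at ±200 Hz.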


When studying automatic control systems it is convenient to use one more characteristic of a stationary random process, called the spectral density. In many cases, especially in studying the transformation of stationary random processes by linear control systems, the spectral density proves to be a more convenient characteristic than the correlation function. The spectral density of a random process is defined as the Fourier transform of the correlation function, i.e.

Using Euler's formula, (9.52) can be represented as

Since the function is odd, the second integral in the last expression is equal to zero. Taking into account that is an even function, we obtain

Since it follows from (9.53) that

Thus the spectral density is a real and even function of the frequency ω. Therefore its graph is always symmetric about the ordinate axis.

If the spectral density is known, then the corresponding correlation function can be found by the inverse Fourier transform formula:

Using (9.55) and (9.38), one can establish an important relationship between the variance and the spectral density of a random process:

The term "spectral density" owes its origin to the theory of electrical oscillations. The physical meaning of the spectral density can be explained as follows.

Let be the voltage applied to an ohmic resistance of 1 Ohm; then the average power dissipated in this resistance over the time is

If we extend the observation interval to infinite limits and use (9.30), (9.38) and (9.55), the formula for the average power can be written as follows:

Equation (9.57) shows that the average signal power can be represented as an infinite sum of infinitesimal terms, extending over all frequencies from 0 to ∞.

Each elementary term of this sum plays the role of the power corresponding to an infinitesimally small portion of the spectrum enclosed between and . Each elementary power is proportional to the value of the function at the given frequency. Hence the physical meaning of the spectral density: it characterizes the distribution of the signal power over the frequency spectrum.

The spectral density can be found experimentally through the average value of the squared amplitudes of the harmonics of a realization of the random process. The instruments used for this purpose, consisting of a spectrum analyzer and a calculator of the average squared harmonic amplitude, are called spectrometers. Finding the spectral density experimentally is more difficult than finding the correlation function; therefore, in practice the spectral density is most often calculated from the known correlation function by formula (9.52) or (9.53).

The mutual spectral density of two stationary random processes is defined as the Fourier transform of the cross-correlation function, i.e.

Using the cross spectral density, applying the inverse Fourier transform to (9.58), we can find an expression for the cross correlation function:

The mutual spectral density is a measure of the statistical relationship between two stationary random processes: if the processes are uncorrelated and have zero mean values, the mutual spectral density is zero, i.e.

In contrast to the spectral density, the mutual spectral density is not an even function of ω, and it is a complex rather than a real function.

Let us consider some properties of spectral densities.

1. The spectral density of a purely random process, or white noise, is constant over the entire frequency range (see Fig. 9.5, d):

Indeed, substituting expression (9.47) for the correlation function of white noise into (9.52), we obtain

The constancy of the spectral density of white noise over the entire infinite frequency range, obtained in the last expression, means that the energy of white noise is distributed uniformly over the entire spectrum, and the total energy of the process is equal to infinity. This indicates the physical unrealizability of a random process such as white noise. White noise is a mathematical idealization of a real process. In reality, the frequency spectrum drops off at very high frequencies (as shown by the dotted line in Figure 9.5d). If, however, these frequencies are so large that they do not play a role when considering a particular device (because they lie outside the frequency band passed by this device), then the idealization of the signal in the form of white noise simplifies the consideration and is therefore quite appropriate.
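A short simulation sketch (discrete-time analogue, my own example, not from the source): averaging periodograms of unit-variance white noise over many realizations gives a spectrum that is flat at level 1 across the whole band:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 1024, 400
acc = np.zeros(N)
for _ in range(trials):
    x = rng.normal(0.0, 1.0, N)           # unit-variance discrete white noise
    acc += np.abs(np.fft.fft(x))**2 / N   # periodogram of one realization
S = acc / trials                          # ensemble-averaged spectrum
print(S.mean(), S.std())                  # mean ≈ 1 in every bin: flat spectrum
```

A single periodogram fluctuates wildly bin to bin; only the ensemble average is flat, which is exactly the sense in which white noise has a constant spectral density.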

The origin of the term "white noise" is explained by the analogy of such a process with white light, which has the same intensity of all components, and by the fact that random processes such as white noise were first identified in the study of thermal fluctuation noise in radio engineering devices.

2. The spectral density of a constant signal is a δ-function located at the origin (see Fig. 9.5, a), i.e.

To prove this, let us assume that the spectral density has the form (9.62) and find, according to (9.55), the correlation function corresponding to it. Since

then, at , we obtain

This (according to property 5 of the correlation functions) means that the signal corresponding to the spectral density defined by (9.62) is a constant signal equal to

The fact that the spectral density is a δ-function at means that all the power of the constant signal is concentrated at zero frequency, as one would expect.

3. The spectral density of a periodic signal consists of two δ-functions located symmetrically about the origin at (see Fig. 9.5, e), i.e.

To prove this, let us assume that the spectral density has the form (9.63) and find from (9.55) the correlation function corresponding to it:

This (according to property 6 of the correlation functions) means that the signal corresponding to the spectral density determined by (9.63) is a periodic signal equal to

The fact that the spectral density is two δ-functions located at means that the entire power of the periodic signal is concentrated at two frequencies. If we consider the spectral density only in the region of positive frequencies, we find that all the power of the periodic signal is concentrated at one frequency.
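The finite-N analogue of this statement can be sketched numerically (an assumed example, not from the source): the DFT power spectrum of a sampled cosine at an exact bin frequency is concentrated in the two bins ±f0:

```python
import numpy as np

N, f0 = 1024, 128                    # f0 chosen as an exact DFT bin
k = np.arange(N)
x = np.cos(2 * np.pi * f0 * k / N)   # sampled periodic signal
P = np.abs(np.fft.fft(x))**2 / N     # discrete power spectrum
print(P[f0], P[N - f0], P.sum())     # all power sits in the two bins ±f0
```

The bins f0 and N − f0 play the roles of the frequencies +f0 and −f0; every other bin is zero to round-off, mirroring the pair of δ-functions.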

4. Based on the above, the spectral density of the time function expanded into a Fourier series has the form

This spectral density corresponds to a line spectrum (Fig. 9.9) with δ-functions located at the positive and negative harmonic frequencies. In Fig. 9.9 the δ-functions are conventionally depicted so that their heights are proportional to the coefficients of the unit δ-function, i.e., to the values and

which completely coincides with the correlation function determined by (9.45).

From Fig. 9.5, b, c it can be seen that the wider the graph of the spectral density, the narrower the graph of the corresponding correlation function, and vice versa. This agrees with the physical essence of the process: the wider the spectral density graph, i.e., the higher the frequencies represented in the spectral density, the greater the degree of variability of the random process. In other words, the relationship between the form of the spectral density and the form of the time function is inverse compared to the relationship between the correlation function and the form of the time function. This is especially pronounced for a constant signal and for white noise. In the first case the correlation function has the form of a horizontal straight line, while the spectral density has the form of a δ-function (see Fig. 9.5, a). In the second case (see Fig. 9.5, d) the picture is reversed.

6. The spectral density of a random process on which periodic components are superimposed contains a continuous part and individual δ-functions corresponding to the frequencies of the periodic components.

Individual peaks in the spectral density plot indicate that the random process is mixed with hidden periodic components that may not be revealed at first glance from individual records of the process. If, for example, a single periodic signal with frequency is superimposed on a random process, then the spectral density graph has the form shown in Fig. 9.10.

Sometimes a normalized spectral density is used, which is the Fourier image of the normalized correlation function (9.48):

The normalized spectral density has the dimension of time.

The quantity characterizing the distribution of energy over the signal spectrum, called the energy spectral density, exists only for signals whose energy over an infinite time interval is finite, so that the Fourier transform is applicable to them.

For signals that do not decay in time, the energy is infinitely large and the integral (1.54) diverges, so the amplitude spectrum cannot be defined. However, the average power P_av, determined by the relation

turns out to be finite. Therefore the broader concept of "power spectral density" is used. We define it as the derivative of the average signal power with respect to frequency and denote it C_k(ω):

The index k emphasizes that here we consider the power spectral density as a characteristic of the deterministic function u(t) that describes the realization of the signal.

This characteristic of the signal is less informative than the spectral density of amplitudes, since it lacks phase information [see (1.38)]. Therefore the original realization of the signal cannot be uniquely restored from it. However, the absence of phase information makes it possible to apply this concept to signals in which the phase is not defined.

To establish the connection between the spectral density C_k(ω) and the amplitude spectrum, we use a signal u(t) that exists on a limited time interval (−T < t

where is the power spectral density of a time-limited signal.

It will be shown below (see § 1.11) that by averaging this characteristic over a set of realizations, one can obtain the power spectral density for a large class of random processes.

Deterministic Signal Autocorrelation Function

There are now two characteristics in the frequency domain: the spectral characteristic and the power spectral density. The spectral characteristic, containing complete information about the signal u(t), corresponds via the Fourier transform to the time function. Let us find out what corresponds in the time domain to the power spectral density, which lacks phase information.

It should be expected that one and the same power spectral density corresponds to a whole set of time functions differing in phase. The Soviet scientist A.Ya. Khinchin and the American scientist N. Wiener almost simultaneously found the inverse Fourier transform of the power spectral density:


The generalized time function r( ), which contains no phase information, will be called the time autocorrelation function. It shows the degree of connection between values of the function u(t) separated by a given time interval, and can be obtained from statistical theory by developing the concept of the correlation coefficient. Note that in the time correlation function the averaging is carried out over time within one realization of sufficiently long duration.
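A small numerical sketch of this phase-blindness (my own example, not from the source): the time autocorrelation of A·cos(ωt + φ), computed by time averaging over one long realization, equals (A²/2)·cos(ωτ) regardless of the phase φ:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 200, dt)       # one long realization for time averaging
A, w = 2.0, 5.0                 # assumed amplitude and angular frequency

def r(tau, phi):
    """Time autocorrelation at lag tau for phase phi (shifted cosine evaluated analytically)."""
    u = A * np.cos(w * t + phi)
    v = A * np.cos(w * (t + tau) + phi)
    return np.mean(u * v)       # time average within one realization

print(r(0.3, 0.0), r(0.3, 1.2), (A**2 / 2) * np.cos(w * 0.3))
```

Both phases give the same value, matching (A²/2)·cos(ωτ): signals differing only in phase share one autocorrelation function and hence one power spectral density.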

Let the signal s(t) be given as a non-periodic function that exists only on the interval (t1, t2) (for example, a single pulse). Choose an arbitrary time interval T that includes the interval (t1, t2) (see Fig. 1).

Let us denote the periodic signal obtained from s(t) by sT(t). Then its Fourier series can be written as

where

Substitute the expression for into the series:

To return to the function s(t), we let the period in the expression for sT(t) tend to infinity. In this case the number of harmonic components with frequencies ω = n·2π/T becomes infinitely large, the distance between them tends to zero (to an infinitesimally small value), and the amplitudes of the components also become infinitesimal. It is then no longer possible to speak of a discrete spectrum of such a signal, since the spectrum becomes continuous.

Passing to the limit as T → ∞, we have:

Thus, in the limit we get

The inner integral is a function of frequency. It is called the spectral density of the signal, or the frequency response of the signal, and is denoted

The direct (*) and inverse (**) Fourier transforms together are referred to as a Fourier transform pair. The modulus of the spectral density determines the amplitude-frequency characteristic (AFC) of the signal, and its argument is called the phase-frequency characteristic (PFC) of the signal. The AFC of the signal is an even function, and the PFC is odd.

The meaning of the modulus S(ω) is the amplitude of the signal (current or voltage) per 1 Hz in an infinitely narrow frequency band containing the frequency of interest ω. Its dimension is [signal/frequency].
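A minimal numerical check of this symmetry (the test signal is an assumption of mine): for a real signal, the sampled spectral density satisfies S(−ω) = S*(ω), so its modulus (AFC) is even and its argument (PFC) is odd:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
s = np.exp(-t) * np.cos(3 * t)            # an arbitrary real test signal
S = np.fft.fft(s) * dt                    # samples of the spectral density
# In FFT layout, bin N-k holds the value at frequency -f_k,
# so conjugate symmetry reads S[k] == conj(S[N-k]):
sym_err = np.max(np.abs(S[1:] - np.conj(S[-1:0:-1])))
print(sym_err)                            # ≈ 0: |S| even, arg S odd
```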

9. Properties of the Fourier transform. Linearity properties, time scale changes, others. Theorem on the spectrum of the derivative. Theorem on the spectrum of the integral.

10. Discrete Fourier Transform. Radio interference. Interference classification.

The Discrete Fourier Transform can be obtained directly from the integral transform by discretizing the arguments (t_k = kΔt, f_n = nΔf):

S(f) = ∫ s(t) exp(−j2πft) dt,  S(f_n) = Δt Σ_k s(t_k) exp(−j2πf_n kΔt), (6.1.1)

s(t) = ∫ S(f) exp(j2πft) df,  s(t_k) = Δf Σ_n S(f_n) exp(j2πnΔf t_k). (6.1.2)

Recall that discretization of a function in time leads to periodization of its spectrum, and discretization of the spectrum in frequency leads to periodization of the function. It should also be kept in mind that the values (6.1.1) of the number series S(f_n) are samples of the continuous function S′(f), the spectrum of the discrete function s(t_k), just as the values (6.1.2) of the number series s(t_k) are samples of a continuous function s′(t); when these continuous functions S′(f) and s′(t) are reconstructed from their discrete samples, the correspondence S′(f) = S(f) and s′(t) = s(t) is guaranteed only if the Kotelnikov–Shannon theorem is satisfied.

For discrete transforms s(kΔt) ↔ S(nΔf), both the function and its spectrum are discrete and periodic, and the numerical arrays representing them correspond to specification on the main periods T = NΔt (from 0 to T, or from −T/2 to T/2) and 2f_N = NΔf (from −f_N to f_N), where N is the number of samples, with:

Δf = 1/T = 1/(NΔt),  Δt = 1/(2f_N) = 1/(NΔf),  ΔtΔf = 1/N,  N = 2Tf_N. (6.1.3)

Relations (6.1.3) are the conditions of informational equivalence of the time and frequency forms of representation of discrete signals. In other words, the number of samples of the function and of its spectrum must be the same. But each sample of the complex spectrum is represented by two real numbers, so the number of samples of the complex spectrum seems to be twice the number of samples of the function. This is true. However, the complex representation of the spectrum is merely a convenient mathematical form of the spectral function, whose real samples are formed by adding two conjugate complex samples; complete information about the spectrum of the function in complex form is contained in only one half of it, the samples of the real and imaginary parts in the frequency interval from 0 to f_N, because the information in the second half of the range, from 0 to −f_N, is conjugate to the first half and carries no additional information.
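This halving of information can be illustrated with NumPy's real-input FFT (a sketch of mine, not from the source): the half-spectrum of samples from 0 to f_N suffices to reconstruct a real sequence exactly:

```python
import numpy as np

x = np.random.default_rng(5).normal(size=100)
half = np.fft.rfft(x)                 # only N/2 + 1 complex samples, 0 … f_N
x_rec = np.fft.irfft(half, n=100)     # full real sequence recovered
err = np.max(np.abs(x - x_rec))
print(len(half), err)                 # 51 complex samples, reconstruction to round-off
```

The 51 complex samples carry 102 real numbers, but the samples at 0 and f_N are purely real for even N, so the information count matches the 100 real samples of x.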

In the discrete representation of signals, the argument t_k is usually specified by the sample numbers k (by default Δt = 1, k = 0, 1, …, N−1), and the Fourier transforms are performed over the argument n (frequency step number) on the main periods. For N a multiple of 2:

S(f_n) ≡ S_n = Σ_k s_k exp(−j2πkn/N),  n = −N/2, …, 0, …, N/2. (6.1.4)

s(t_k) ≡ s_k = (1/N) Σ_n S_n exp(j2πkn/N),  k = 0, 1, …, N−1. (6.1.5)

The main period of the spectrum in (6.1.4) runs over cyclic frequencies from −0.5 to 0.5, and over angular frequencies from −π to π. For an odd value of N, the boundaries of the main period in frequency (the values ±f_N) lie half a frequency step beyond the samples ±(N/2), and accordingly the upper summation limit in (6.1.5) is set equal to N/2.



In computer calculations, in order to avoid negative frequency arguments (negative values of the numbers n) and to use identical algorithms for the direct and inverse Fourier transforms, the main period of the spectrum is usually taken in the range from 0 to 2f_N (0 ≤ n ≤ N), and the summation in (6.1.5) is performed from 0 to N−1. In this case it should be taken into account that the complex-conjugate samples S_n* of the interval (−N, 0) of the two-sided spectrum correspond, in the interval 0…2f_N, to the samples S_{N+1−n} (i.e., the conjugate samples in the interval 0…2f_N are S_n and S_{N+1−n}).

Example: On the interval T = , with N = 100, the discrete signal s(k) = δ(k−i) is given, a rectangular pulse with unit values at the points k from 3 to 8. The signal shape and the modulus of its spectrum in the main frequency range, computed by the formula S(n) = Σ_k s(k)·exp(−j2πkn/100) with n running from −50 to +50 and a frequency step Δω = 2π/100, are shown in Fig. 6.1.1.

Fig. 6.1.1. Discrete signal and the modulus of its spectrum.

Fig. 6.1.2 shows the envelope of the values of another form of representation of the main range of the spectrum. Regardless of the form of representation, the spectrum is periodic, which is easy to see if the spectrum values are computed over a larger interval of the argument n with the same frequency step, as shown in Fig. 6.1.3 for the envelope of the spectrum values.

Fig. 6.1.2. Spectrum modulus. Fig. 6.1.3. Spectrum modulus.

Fig. 6.1.4 shows the inverse Fourier transform of the discrete spectrum, performed by the formula s′(k) = (1/100) Σ_n S(n)·exp(j2πkn/100), which demonstrates the periodization of the original function s(k); but the main period k = 0…99 of this function completely coincides with the original signal s(k).

Fig. 6.1.4. Inverse Fourier transform.

Transforms (6.1.4)–(6.1.5) are called the Discrete Fourier Transform (DFT). In principle, all the properties of the integral Fourier transform hold for the DFT, but the periodicity of discrete functions and spectra must be taken into account. The product of the spectra of two discrete functions (arising in any frequency-domain signal-processing operation, such as filtering signals directly in the frequency domain) corresponds to the convolution of the periodized functions in the time domain (and vice versa). Such a convolution is called cyclic (see Section 6.4), and at the end sections of the information intervals its results can differ significantly from the convolution of finite discrete functions (linear convolution).
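A sketch of the difference (my own example values, not from the source): multiplying DFT spectra yields the cyclic convolution, which matches linear convolution only after zero-padding to at least N1 + N2 − 1 points:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, 1.0, 1.0, 1.0])
# product of spectra -> cyclic convolution of the periodized sequences
cyc = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
lin = np.convolve(a, b)                       # linear convolution, length 7
# zero-padding to >= len(a)+len(b)-1 removes the wrap-around at the ends
pad = np.real(np.fft.ifft(np.fft.fft(a, 8) * np.fft.fft(b, 8)))
print(cyc, lin, pad[:7])
```

The 4-point cyclic result wraps the tail of the linear convolution back onto its head, exactly the end-section discrepancy the text warns about.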

It can be seen from the DFT expressions that computing each harmonic requires N complex multiply-add operations, hence N² operations for the complete DFT. For large data arrays this can lead to significant computation time. The computation is accelerated by using the fast Fourier transform (FFT).
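The N² operation count can be made concrete with a direct matrix implementation of (6.1.4) (a sketch of mine; NumPy's FFT is the fast O(N log N) counterpart):

```python
import numpy as np

def dft(x):
    """Direct O(N^2) DFT per (6.1.4): an N x N matrix of exponentials times x."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N^2 complex multiplications
    return W @ x

x = np.random.default_rng(1).normal(size=128)
err = np.max(np.abs(dft(x) - np.fft.fft(x)))       # compare with the FFT
print(err)                                         # agreement to round-off
```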

Interference

Interference is the usual name for extraneous electrical disturbances that are superimposed on the transmitted signal and make its reception difficult. At high interference intensity, reception becomes practically impossible.

Interference classification:

a) interference from neighboring radio transmitters (stations);

b) interference from industrial installations;

c) atmospheric interference (thunderstorms, precipitation);

d) interference caused by the passage of electromagnetic waves through the layers of the atmosphere: troposphere, ionosphere;

e) thermal and shot noise in the elements of radio circuits, due to the thermal motion of electrons.

Mathematically, the signal at the receiver input can be represented either as the sum of the transmitted signal and the interference, in which case the interference is called additive, or simply noise, or as the product of the transmitted signal and the interference, in which case the interference is called multiplicative. Multiplicative interference leads to significant changes in the signal intensity at the receiver input and explains such phenomena as fading.

The presence of interference makes signal reception difficult; at high interference intensity, signal recognition may become practically impossible. The ability of a system to withstand interference is called noise immunity.

External natural active interference comprises noise arising from the radio emission of the earth's surface and of space objects, and from the operation of other electronic equipment. The set of measures aimed at reducing the mutual interference of radio-electronic systems (RES) is called electromagnetic compatibility. This set includes technical measures (improving radio equipment, choosing the signal shape and the method of processing it) as well as organizational measures: frequency regulation, spatial separation of RES, limiting the level of out-of-band and spurious emissions, etc.

11. Discretization of continuous signals. Kotelnikov's (sampling) theorem. The concept of the Nyquist frequency. The concept of the discretization interval.

For completeness, we briefly discuss below the concepts of spectrum and spectral density. The application of these important concepts is described in more detail in . We do not use them for time series analysis in this book, so this section can be omitted on first reading.

Sample spectrum. When determining the periodogram (2.2.5), it is assumed that the frequencies are harmonics of the fundamental frequency. By introducing the spectrum, we relax this assumption and allow the frequency to vary continuously in the range of 0-0.5 Hz. The definition of a periodogram can be modified as follows:

, , (2.2.7)

where is called the sample spectrum. Like the periodogram, it can be used to detect and estimate the amplitude of a sinusoidal component of unknown frequency hidden in noise, and indeed it is even more convenient for this, unless the frequency is known to be harmonically related to the length of the series, i.e. . Moreover, it is the starting point of the theory of spectral analysis, which uses the important relation given in Appendix A2.1. This relation establishes a connection between the sample spectrum and the estimate of the autocovariance function:

. (2.2.8)

So the sample spectrum is the cosine Fourier transform of the sample autocovariance function.
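The discrete analogue of this statement can be verified directly (my own sketch, not from the source): the periodogram of a mean-corrected series coincides, at the Fourier frequencies, with the cosine transform c0 + 2 Σ c_k cos(2πkn/N) of its biased sample autocovariances:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=64)
x = x - x.mean()                               # mean-corrected series
N = len(x)
I = np.abs(np.fft.fft(x))**2 / N               # periodogram at Fourier frequencies
# biased sample autocovariances c_k = (1/N) sum x_t x_{t+k}
c = np.array([np.sum(x[:N - k] * x[k:]) / N for k in range(N)])
k = np.arange(1, N)
I2 = np.array([c[0] + 2 * np.sum(c[1:] * np.cos(2 * np.pi * k * m / N))
               for m in range(N)])             # cosine transform of the c_k
diff = np.max(np.abs(I - I2))
print(diff)                                    # ≈ 0: the two definitions agree
```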

Spectrum. The periodogram and the sample spectrum are convenient concepts for the analysis of time series formed by a mixture of sines and cosines with constant frequencies hidden in noise. However, stationary time series of the type described in Sec. 2.1 are characterized by random changes of frequency, amplitude and phase. For such series the sample spectrum fluctuates strongly and does not admit any reasonable interpretation.

Suppose, however, that the sample spectrum has been computed for a time series of observations that is a realization of a stationary normal process. As mentioned above, such a process has no deterministic sinusoidal or cosinusoidal components, but we can formally perform the Fourier analysis and obtain values of , for any frequency . If repeated sets of observations are generated by the stochastic process, we can collect a population of values and . Then we can find the average over repetitions of length , namely

. (2.2.9)

For large values, it can be shown (see, for example, ) that the average value of autocovariance in repeated realizations tends to the theoretical autocovariance, i.e.

Passing to the limit in (2.2.9) for , we define the power spectrum as

, . (2.2.10)

Note that since

then, for the spectrum to converge, must decrease so rapidly with increasing that the series (2.2.11) converges. Since the power spectrum is the cosine Fourier transform of the autocovariance function, knowledge of the autocovariance function is mathematically equivalent to knowledge of the power spectrum, and vice versa. In what follows we shall refer to the power spectrum simply as the spectrum.

Integrating (2.2.10) in the range from 0 to 1/2 , we find the variance of the process

. (2.2.12)

Therefore, just as the periodogram shows how the variance (2.2.6) of a series consisting of a mixture of sines and cosines is distributed among the various harmonic components, the spectrum shows how the variance of a stochastic process is distributed over a continuous range of frequencies. The quantity can be interpreted as the approximate value of the variance of the process in the frequency range from to .

Normalized spectrum. Sometimes it is more convenient to define the spectrum (2.2.10) in terms of autocorrelations rather than autocovariances. The resulting function

, (2.2.13)

However, it can be shown (see ) that the sample spectrum of a stationary time series fluctuates strongly about the theoretical spectrum. The intuitive explanation of this fact is that the sample spectrum corresponds to the use of too narrow an interval in the frequency domain. This is analogous to using too narrow a grouping interval for a histogram when estimating a probability distribution. By using a modified, or smoothed, estimate

, (2.2.14)

where the are specially selected weights called the correlation window, one can increase the "bandwidth" of the estimate and obtain a smoothed spectrum estimate.
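One concrete choice of correlation window (an assumption of mine; the source leaves the weights general) is the Bartlett (triangular) lag window, giving a smoothed estimate in the spirit of (2.2.14):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=512)
x = x - x.mean()
N, M = len(x), 32                              # M = truncation point of the window
# biased sample autocovariances up to lag M-1
c = np.array([np.sum(x[:N - k] * x[k:]) / N for k in range(M)])
w = 1 - np.arange(M) / M                       # Bartlett (triangular) lag weights
freqs = np.linspace(0, 0.5, 101)               # cycles per sampling interval
k = np.arange(1, M)
S = np.array([c[0] + 2 * np.sum(w[1:] * c[1:] * np.cos(2 * np.pi * f * k))
              for f in freqs])                 # smoothed spectrum estimate
print(S.mean())                                # ≈ 1 for unit-variance white noise
```

Tapering and truncating the autocovariances widens the effective frequency "bandwidth," trading a little bias for a large reduction in the variance of the estimate compared with the raw sample spectrum.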

Fig. 2.8 shows a sample estimate of the spectrum of the data on batches of the product. It can be seen that the variance of the series is concentrated mainly at high frequencies. This is caused by the rapid oscillations of the original series shown in Fig. 2.1.