
Spectral density of signals. Autocorrelation of random processes stationary in the broad sense

Mathematical models of many signals widely used in radio engineering do not satisfy the absolute integrability condition, so the Fourier transform in its usual form cannot be applied to them. Nevertheless, as was pointed out earlier, one can speak of the spectral densities of such signals if these densities are taken to be generalized functions.

Generalized Rayleigh formula. Let us prove an important auxiliary statement concerning the spectral properties of signals.

Let two signals u(t) and v(t), in the general case complex-valued, be defined by their inverse Fourier transforms:

$$u(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}U(\omega)e^{j\omega t}\,d\omega,\qquad v(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}V(\omega)e^{j\omega t}\,d\omega.$$

Let us find the scalar product of these signals, expressing one of them, say v(t), through its spectral density:

$$(u,v)=\int_{-\infty}^{\infty}u\,v^{*}\,dt=\frac{1}{2\pi}\int_{-\infty}^{\infty}V^{*}(\omega)\left[\int_{-\infty}^{\infty}u(t)e^{-j\omega t}\,dt\right]d\omega.$$

Here the inner integral is obviously the spectral density U(ω) of the signal u(t). Therefore

$$(u,v)=\frac{1}{2\pi}\int_{-\infty}^{\infty}U(\omega)V^{*}(\omega)\,d\omega=\frac{1}{2\pi}(U,V).$$

The resulting relation is the generalized Rayleigh formula. An easily remembered interpretation of this formula is as follows: the scalar product of two signals is, up to the coefficient 1/2π, equal to the scalar product of their spectral densities.

Generalization of the concept of spectral density.

Suppose the signal u(t) is absolutely integrable. Then its Fourier transform U(ω) is an ordinary classical function of frequency. Now let a signal s(t) fail to satisfy the absolute integrability condition, so that its Fourier transform does not exist in the usual classical sense. The concept of spectral density can nevertheless be extended by assuming that S(ω) is a generalized function in the sense established in § 1.2. To do so, in accordance with the generalized Rayleigh formula, it suffices to regard S(ω) as a functional that, acting on the known spectral density U(ω), gives the following result:

$$(s,u)=\frac{1}{2\pi}(S,U)=\frac{1}{2\pi}\int_{-\infty}^{\infty}S(\omega)U^{*}(\omega)\,d\omega.\qquad(2.43)$$

It is advisable to consider methods for calculating the spectra of non-integrable signals using specific examples.

Spectral density of a time-constant signal. The simplest non-integrable signal is a constant, s(t) = A₀ = const. Suppose that u(t) is an arbitrary real, absolutely integrable signal with a known spectral density U(ω).

Expanding formula (2.43), we have

$$(s,u)=A_{0}\int_{-\infty}^{\infty}u(t)\,dt=\frac{1}{2\pi}\int_{-\infty}^{\infty}S(\omega)U^{*}(\omega)\,d\omega.$$

But, as is easy to see, for a real signal u(t)

$$\int_{-\infty}^{\infty}u(t)\,dt=U(0).$$

Hence, based on the filtering property of the delta function, we conclude that equality (2.43) is possible only under the condition that

$$S(\omega)=2\pi A_{0}\,\delta(\omega).$$

The physical meaning of the result obtained is clear - a time-invariant signal has a spectral component only at zero frequency.
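This is easy to see numerically as well. The following minimal Python sketch (an added illustration with arbitrary values, not part of the original text) shows that the DFT of a sampled constant is concentrated entirely in the zero-frequency bin, mirroring S(ω) = 2πA₀δ(ω):

```python
import numpy as np

A0 = 3.0
N = 4096
s = np.full(N, A0)          # a time-constant signal, sampled

S = np.fft.fft(s)
print(abs(S[0]))            # N * A0 = 12288: all "area" sits in the DC bin
print(np.max(np.abs(S[1:])))  # ~0: no component at any other frequency
```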

Spectral density of a complex exponential signal.

Let s(t) = exp(jω₀t) be a complex exponential signal with a given real frequency ω₀. This signal is not absolutely integrable, since the function s(t) does not tend to any limit as |t| → ∞. The Fourier transform of this signal, considered in the generalized sense, must satisfy the relation

$$(u,s)=\frac{1}{2\pi}\int_{-\infty}^{\infty}U(\omega)S^{*}(\omega)\,d\omega=\int_{-\infty}^{\infty}u(t)e^{-j\omega_{0}t}\,dt=U(\omega_{0}).$$

Hence the desired spectral density S(ω) is expressed as follows:

$$S(\omega)=2\pi\,\delta(\omega-\omega_{0}).\qquad(2.45)$$

Note the following:

1. The spectral density of a complex exponential signal is zero everywhere except the point ω = ω₀, where it has a delta singularity.

2. The spectrum of this signal is asymmetric about the point ω = 0 and is concentrated in the region of either positive or negative frequencies.

Spectral density of harmonic oscillations. Let s(t) = cos ω₀t. By Euler's formula

$$\cos\omega_{0}t=\tfrac{1}{2}\left(e^{j\omega_{0}t}+e^{-j\omega_{0}t}\right).$$

The spectrum of the complex exponential signal found above, together with the linearity property of the Fourier transform, allows us to write at once the expression for the spectral density of the cosine signal:

$$S(\omega)=\pi\left[\delta(\omega-\omega_{0})+\delta(\omega+\omega_{0})\right].\qquad(2.46)$$

The reader can easily verify that for a sinusoidal signal the corresponding relation is

$$S(\omega)=-j\pi\left[\delta(\omega-\omega_{0})-\delta(\omega+\omega_{0})\right].\qquad(2.47)$$

It should be noted that expression (2.46) is an even function of frequency, while expression (2.47) is an odd one.

Spectral density of an arbitrary periodic signal.

Previously, periodic signals were studied by the methods of Fourier series theory. We can now broaden our view of their spectral properties by describing periodic signals with the help of the Fourier transform.

Let a periodic signal be given by its Fourier series in complex form:

$$s(t)=\sum_{n=-\infty}^{\infty}C_{n}\,e^{jn\omega_{1}t}.$$

On the basis of formula (2.45), taking into account the linearity property of the Fourier transform, we immediately obtain the expression for the spectral density of such a signal:

$$S(\omega)=2\pi\sum_{n=-\infty}^{\infty}C_{n}\,\delta(\omega-n\omega_{1}).$$

The corresponding spectral density graph repeats, in its configuration, the usual spectral diagram of a periodic signal. The graph is formed by delta pulses in the frequency domain located at the points ω = nω₁, n = 0, ±1, ±2, …

Spectral density of the switching function.

Let us calculate the spectral density of the switching function σ(t), which for simplicity we define at all points except t = 0 [cf. (1.2)]:

$$\sigma(t)=\begin{cases}0,&t<0,\\1,&t>0.\end{cases}$$

First of all, we note that the switching function can be obtained by a passage to the limit from an exponential video pulse:

$$\sigma(t)=\lim_{\alpha\to0}\begin{cases}0,&t<0,\\e^{-\alpha t},&t>0.\end{cases}$$

Therefore one can try to obtain the spectral density of the switching function by passing to the limit α → 0 in the formula for the spectral density of the exponential oscillation:

$$S(\omega)=\lim_{\alpha\to0}\frac{1}{\alpha+j\omega}.$$

A direct passage to the limit, according to which S(ω) = 1/(jω), is valid at all frequencies except ω = 0, where a more careful treatment is needed.

First of all, we separate the real and imaginary parts of the spectral density of the exponential signal:

$$\frac{1}{\alpha+j\omega}=\frac{\alpha}{\alpha^{2}+\omega^{2}}-j\,\frac{\omega}{\alpha^{2}+\omega^{2}}.$$

It can be verified that

$$\lim_{\alpha\to0}\frac{\alpha}{\alpha^{2}+\omega^{2}}=\pi\,\delta(\omega).$$

Indeed, the limiting value of this fraction vanishes for any ω ≠ 0, and at the same time

$$\int_{-\infty}^{\infty}\frac{\alpha\,d\omega}{\alpha^{2}+\omega^{2}}=\pi$$

regardless of the value of α, from which the assertion follows.

So, we have obtained a one-to-one correspondence between the switching function and its spectral density:

$$\sigma(t)\;\leftrightarrow\;\pi\,\delta(\omega)+\frac{1}{j\omega}.$$

The delta singularity at ω = 0 indicates that the switching function has a constant component equal to 1/2.

Spectral density of a radio pulse.

As is known, a radio pulse is defined as the product of a video pulse A(t), which plays the role of an envelope, and a non-integrable harmonic oscillation: s(t) = A(t) cos(ω₀t + φ₀).

To find the spectral density of the radio pulse, we assume that the spectrum S_A(ω) of its envelope is known. The spectrum of a cosine signal with an arbitrary initial phase is obtained by an elementary generalization of formula (2.46):

$$\cos(\omega_{0}t+\varphi_{0})\;\leftrightarrow\;\pi\left[e^{j\varphi_{0}}\,\delta(\omega-\omega_{0})+e^{-j\varphi_{0}}\,\delta(\omega+\omega_{0})\right].$$

The spectrum of the radio pulse is the convolution

$$S(\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty}S_{A}(\xi)\,S_{\cos}(\omega-\xi)\,d\xi,$$

where S_cos(ω) denotes the spectrum of the harmonic factor.

Taking into account the filtering property of the delta function, we obtain the important result:

$$S(\omega)=\tfrac{1}{2}e^{j\varphi_{0}}S_{A}(\omega-\omega_{0})+\tfrac{1}{2}e^{-j\varphi_{0}}S_{A}(\omega+\omega_{0}).\qquad(2.50)$$

Fig. 2.8 illustrates the transformation of the spectrum of a video pulse when it is multiplied by a high-frequency harmonic signal.

Fig. 2.8. Frequency dependences of the modulus of the spectral density: a - video pulse; b - radio pulse

It can be seen that, in spectral terms, the transition from a video pulse to a radio pulse means shifting the video pulse spectrum into the high-frequency region: instead of a single spectral density maximum at ω = 0, two maxima appear at ω = ±ω₀, and the absolute values of the maxima are halved.

Note that the graphs in Fig. 2.8 correspond to the situation where the frequency ω₀ significantly exceeds the effective width of the video pulse spectrum (the case usually realized in practice). In this case there is no noticeable "overlap" of the spectra corresponding to positive and negative frequencies. However, it may turn out that the width of the video pulse spectrum is so large (for a short pulse) that the selected frequency ω₀ does not eliminate the "overlap" effect. As a consequence, the profiles of the spectra of the video pulse and the radio pulse cease to be similar.
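This effect is easy to reproduce numerically. The sketch below (with illustrative sample rate, pulse width, and carrier frequency of my own choosing) computes the spectra of a rectangular video pulse and of the corresponding radio pulse; the maximum moves from ω = 0 to ±ω₀ and is halved:

```python
import numpy as np

fs, tau, f0 = 1000.0, 1.0, 50.0           # sample rate, pulse width, carrier
t = np.arange(-5, 5, 1/fs)
video = np.where(np.abs(t) <= tau/2, 1.0, 0.0)
radio = video * np.cos(2*np.pi*f0*t)      # video pulse times the carrier

f = np.fft.fftshift(np.fft.fftfreq(t.size, 1/fs))
V = np.fft.fftshift(np.abs(np.fft.fft(video))) / fs   # approx. continuous FT
R = np.fft.fftshift(np.abs(np.fft.fft(radio))) / fs

print(V.max())                            # ~tau = 1: maximum at f = 0
print(R[np.argmin(np.abs(f - f0))])       # ~tau/2: maximum moved to +-f0, halved
```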

Example 2.3. Spectral density of a rectangular radio pulse.

For simplicity we set the initial phase to zero and write the mathematical model of the radio pulse in the form

$$s(t)=\begin{cases}A_{0}\cos\omega_{0}t,&|t|\le\tau_{u}/2,\\0,&|t|>\tau_{u}/2,\end{cases}$$

where τ_u is the pulse duration.

Knowing the spectrum of the corresponding video pulse [see formula (2.20)], on the basis of (2.50) we find the required spectrum:

$$S(\omega)=\frac{A_{0}\tau_{u}}{2}\left[\frac{\sin\left[(\omega-\omega_{0})\tau_{u}/2\right]}{(\omega-\omega_{0})\tau_{u}/2}+\frac{\sin\left[(\omega+\omega_{0})\tau_{u}/2\right]}{(\omega+\omega_{0})\tau_{u}/2}\right].\qquad(2.51)$$

Fig. 2.9 shows the results of calculating the spectral density by formula (2.51) for two characteristic cases.

In the first case (Fig. 2.9, a) the pulse contains 10 periods of the high-frequency filling; the frequency ω₀ here is high enough for "overlap" to be avoided. In the second case (Fig. 2.9, b) the radio pulse consists of only one filling period. The superposition of the components corresponding to the regions of positive and negative frequencies leads to a characteristic asymmetry of the lobe structure of the graph of the spectral density of the radio pulse.

Fig. 2.9. Graphs of the spectral density of a radio pulse with a rectangular envelope: a - ten periods of high-frequency filling; b - one filling period

In statistical radio engineering and physics, the study of deterministic signals and random processes makes wide use of their spectral representation in the form of a spectral density, which is based on the Fourier transform.

If a process has finite energy and is square-integrable (such a process is necessarily non-stationary), then for an individual realization x(t) of the process the Fourier transform can be defined as a random complex function of frequency:

$$X(f)=\int_{-\infty}^{\infty}x(t)\,e^{-i2\pi ft}\,dt.\qquad(1)$$

However, X(f) turns out to be almost useless for describing the ensemble. The way out of this situation is to discard part of the spectral information, namely the phase spectrum, and to construct a function that characterizes the distribution of the energy of the process along the frequency axis. Then, according to Parseval's theorem, the energy is

$$E_{x}=\int_{-\infty}^{\infty}|x(t)|^{2}\,dt=\int_{-\infty}^{\infty}|X(f)|^{2}\,df.\qquad(2)$$

The function S_x(f) = |X(f)|² thus characterizes the distribution of the realization's energy along the frequency axis and is called the spectral density of the realization. By averaging this function over all realizations one obtains the spectral density of the process.

Let us now turn to a centered random process x(t) that is stationary in the broad sense, whose realizations have infinite energy with probability 1 and therefore do not possess a Fourier transform. The power spectral density of such a process can be found from the Wiener-Khinchin theorem as the Fourier transform of the correlation function:

$$S_{x}(f)=\int_{-\infty}^{\infty}k_{x}(\tau)\,e^{-i2\pi f\tau}\,d\tau.\qquad(3)$$

Since the direct transform exists, there is also the inverse Fourier transform, which determines k_x(τ) from the known S_x(f):

$$k_{x}(\tau)=\int_{-\infty}^{\infty}S_{x}(f)\,e^{i2\pi f\tau}\,df.\qquad(4)$$

Setting f = 0 and τ = 0 in formulas (3) and (4), respectively, we have

$$S_{x}(0)=\int_{-\infty}^{\infty}k_{x}(\tau)\,d\tau,\qquad(5)$$
$$\sigma_{x}^{2}=k_{x}(0)=\int_{-\infty}^{\infty}S_{x}(f)\,df.\qquad(6)$$

Formula (6), together with (2), shows that the variance determines the total energy of a stationary random process, which is equal to the area under the spectral density curve. The quantity S_x(f) df can be interpreted as the fraction of energy concentrated in a small frequency range from f − df/2 to f + df/2. If x(t) is understood as a random (fluctuation) current or voltage, the quantity S_x(f) has the dimension of energy, [V²/Hz] = [V²·s], and S_x(f) is therefore sometimes called the energy spectrum. Another interpretation is often found in the literature: σ_x² is regarded as the average power dissipated by the current or voltage in a resistance of 1 Ohm, in which case S_x(f) is called the power spectrum of the random process.

Spectral Density Properties

  • The energy spectrum of a stationary process (real or complex) is a non-negative quantity:

$$S_{x}(f)\ge 0.\qquad(7)$$

  • The energy spectrum of a real random process that is stationary in the broad sense is a real and even function of frequency:

$$S_{x}(-f)=S_{x}(f).\qquad(8)$$
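Relations (3)-(8) are easy to check numerically. Below is a minimal sketch assuming an exponential correlation function k_x(τ) = σ²e^{−a|τ|} (an illustrative choice, not from the text), whose analytic transform is the Lorentzian S_x(f) = 2aσ²/(a² + (2πf)²):

```python
import numpy as np

sigma2, a = 2.0, 5.0
dt = 1e-3
tau = np.arange(-20, 20, dt)
k = sigma2 * np.exp(-a * np.abs(tau))     # exponential correlation function

# numeric Fourier transform of k_x at a few frequencies (k is even -> cosine FT)
for f in (0.0, 1.0, 3.0):
    S_num = np.sum(k * np.cos(2*np.pi*f*tau)) * dt
    S_ana = 2*a*sigma2 / (a**2 + (2*np.pi*f)**2)
    print(f, S_num, S_ana)    # the two columns agree; S_x(f) >= 0 and even
```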

1. Signals and spectra

1.1. Signal processing in digital communications

1.1.1. Why "digital"

Why are "numbers" used in military and commercial communications systems? There are many reasons. The main advantage of this approach is the ease of reconstruction of digital signals compared to analog ones. Consider Fig. 1.1, which shows an ideal binary digital pulse propagating through a data channel. The waveform is affected by two main mechanisms: (1) since all channels and transmission lines have a non-ideal frequency response, the ideal pulse is distorted; and (2) unwanted electrical noise or other outside interference further distorts the waveform. The longer the channel, the more significantly these mechanisms distort the impulse (Fig. 1.1). While the transmitted pulse can still be reliably detected (before it degrades to an ambiguous state), the pulse is amplified by a digital amplifier, restoring its original ideal shape. The momentum is "reborn" or restored. Regenerative repeaters located in the communication channel at a certain distance from each other are responsible for signal restoration.

Digital channels are less susceptible to distortion and interference than analog ones. Since binary digital channels give a meaningful signal only when operating in one of two states - on or off - a disturbance must be large enough to move the channel's operating point from one state to the other. Having only two states facilitates signal recovery and therefore prevents the accumulation of noise and other disturbances during transmission. Analog signals, on the other hand, are not two-state signals; they can take an infinite number of forms. In analog channels even a small disturbance can distort the signal beyond recognition. Once an analog signal has been distorted, the disturbance cannot be removed by amplification. Since noise accumulation is inextricably linked with analog signals, they cannot be reproduced perfectly. With digital techniques, the very low error rate plus the use of error detection and correction procedures make high signal fidelity possible. It remains only to note that such procedures are not available with analog techniques.
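A toy numerical sketch of this regeneration property (illustrative amplitudes and noise level, assumed for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 20)
tx = 2.0*bits - 1.0                          # ideal +-1 pulse amplitudes

rx = 0.6*tx + 0.2*rng.standard_normal(20)    # attenuation + channel noise
regenerated = np.where(rx > 0, 1.0, -1.0)    # two-state threshold decision

print(np.sum(regenerated != tx))  # 0 bit errors (with high probability at this SNR)
# An analog amplifier applied to rx would amplify the noise along with the signal.
```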

Fig. 1.1. Pulse distortion and regeneration

There are other important advantages of digital communication. Digital circuits are more reliable and can be produced at lower cost than analog ones. In addition, digital hardware admits more flexible implementation than analog (e.g., microprocessors, digital switching, and large-scale integration (LSI) circuits). Combining digital signals by time-division multiplexing (TDM) is simpler than combining analog signals by frequency-division multiplexing (FDM). In transmission and switching, different types of digital signals (data, telegraph, telephone, television) can be treated identically: after all, a bit is a bit. In addition, for convenience of switching and processing, digital messages can be grouped into autonomous units called packets. Digital techniques naturally incorporate functions that protect against interference and jamming, or provide encryption and privacy. (Such techniques are discussed in Chapters 12 and 14.) Moreover, communication is largely between two computers, or between a computer and digital devices or a terminal. Such digital terminals are better (and more naturally!) served by digital communication channels.

What do we pay for the benefits of digital communication systems? Digital systems require more processing than analog systems. In addition, digital systems require a significant amount of resources to be allocated for synchronization at various levels (see Chapter 10). Analog systems, on the other hand, are easier to synchronize. Another disadvantage of digital communication systems is that the degradation in quality is of a threshold nature. If the signal-to-noise ratio falls below a certain threshold, the quality of service may suddenly change from very good to very bad. In analog systems, however, degradation occurs more smoothly.

1.1.2. Typical block diagram and basic transformations

The functional block diagram shown in Fig. 1.2 illustrates the signal propagation and processing steps in a typical digital communication system (DCS). The upper blocks - formatting, source coding, encryption, channel coding, multiplexing, pulse modulation, bandpass modulation, spread spectrum, and multiple access - reflect the signal transformations on the way from the source to the transmitter. The lower blocks of the diagram are the signal transformations on the way from the receiver to the recipient of the information, and they are essentially the inverse of the upper blocks. The modulation and demodulation/detection blocks are collectively called a modem. The term "modem" often covers several of the signal processing steps shown in Fig. 1.2; in that case the modem can be thought of as the "brain" of the system, and the transmitter and receiver as its "muscles". For wireless applications, the transmitter consists of a radio frequency (RF) up-conversion circuit, a power amplifier, and an antenna, while the receiver consists of an antenna and a low-noise amplifier (LNA). Frequency down-conversion is performed in the receiver front end and/or the demodulator.

Fig. 1.2 shows the correspondence between the blocks of the upper (transmitting) and lower (receiving) parts of the system. The signal processing steps in the transmitter are predominantly the reverse of the receiver steps. In Fig. 1.2 the source information is converted into binary digits (bits); the bits are then grouped into digital messages or message symbols. Each such symbol (mᵢ, where i = 1, …, M) can be regarded as an element of a finite alphabet containing M elements. Thus, for M = 2 the message symbol is binary (i.e., it consists of one bit). Although binary symbols fall under the definition of M-ary (with M = 2), the name "M-ary" is usually reserved for the cases M > 2; such symbols therefore consist of sequences of two or more bits. (Compare this finite alphabet of a DCS with an analog system, where the message signal is an element of an infinite set of possible signals.) For systems using channel coding (error-correction codes), the sequence of message symbols is converted into a sequence of channel symbols (code symbols), each denoted uᵢ. Since message symbols or channel symbols may consist of a single bit or a group of bits, a sequence of such symbols is called a bit stream (Fig. 1.2).

Consider the key signal processing blocks shown in Fig. 1.2; only the formatting, modulation, demodulation/detection, and synchronization steps are mandatory in a DCS.

Formatting converts the source information into bits, thereby ensuring compatibility between the information and the signal processing functions of the DCS. From this point in the figure up to the pulse modulation block, the information remains in the form of a bit stream.

Fig. 1.2. Block diagram of a typical digital communication system

Modulation is the process by which message symbols or channel symbols (if channel coding is used) are converted into signals compatible with the requirements imposed by the data channel. Pulse modulation is a necessary step because each symbol to be transmitted must first be converted from its binary representation (voltage levels representing binary zeros and ones) into a baseband waveform. The term "baseband" denotes a signal whose spectrum extends from (or near) the DC component up to some finite value, usually no more than a few megahertz. The pulse modulation block typically also includes filtering to minimize the transmission bandwidth. When pulse modulation is applied to binary symbols, the resulting binary signal is called a PCM (pulse-code modulation) signal. There are several types of PCM signals (described in Chapter 2); in telephony applications these signals are often called line codes. When pulse modulation is applied to non-binary symbols, the resulting signal is called M-ary pulse-modulated. There are several types of such signals; they too are described in Chapter 2, which concentrates on pulse-amplitude modulation (PAM). After pulse modulation, each message symbol or channel symbol takes the form of a baseband signal gᵢ(t), where i = 1, …, M. In any electronic implementation the bit stream preceding pulse modulation is represented by voltage levels. The question may arise why there is a separate block for pulse modulation when the voltage levels for binary zeros and ones can already be regarded as ideal rectangular pulses, each lasting one bit time. There are two important differences between such voltage levels and the baseband signals used for modulation. First, the pulse modulation block allows both binary and M-ary signals to be used (Section 2.8.2 describes useful parameters of these signal types). Second, the filtering performed in the pulse modulation block produces pulses whose duration exceeds one bit time; the pulses are thereby spread over adjacent bit slots. This process is sometimes called pulse shaping; it is used to keep the transmission band within some desired region of the spectrum.
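A minimal pulse-shaping sketch (the smoothing pulse and the 99%-energy bandwidth measure are illustrative choices of mine, not a standard from the text): smoothed pulses spread over adjacent bit slots but occupy a much narrower band than rectangular ones.

```python
import numpy as np

fs, Tb = 100, 1.0                        # samples per second, bit duration
bits = np.array([1, 0, 1, 1, 0])
levels = 2.0*bits - 1.0

nrz = np.repeat(levels, int(fs*Tb))      # ideal rectangular pulses

h = np.hanning(int(2*fs*Tb))             # smoothing pulse, two bit slots long
h /= h.sum()
shaped = np.convolve(nrz, h, mode="same")  # tails spill into adjacent slots

def bandwidth_99(x, fs):
    """Frequency below which 99% of the waveform energy lies."""
    X2 = np.abs(np.fft.rfft(x))**2
    c = np.cumsum(X2) / X2.sum()
    return np.searchsorted(c, 0.99) * fs / len(x)

print(bandwidth_99(nrz, fs), bandwidth_99(shaped, fs))  # shaped << rectangular
```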

For applications involving radio-frequency transmission, the next important step is bandpass modulation; it is required whenever the transmission medium does not support the propagation of pulse-like waveforms. In such cases the medium requires a bandpass signal sᵢ(t), where i = 1, …, M. The term "bandpass" reflects the fact that the baseband signal gᵢ(t) is translated by a carrier wave to a frequency much higher than the spectral components of gᵢ(t). As the signal propagates through the channel, it is affected by the channel characteristics, which can be described in terms of the impulse response h_c(t) (see Section 1.6.1). In addition, at various points along the signal path, additive random noise corrupts the received signal, so reception must be described in terms of a corrupted version of the transmitted signal. The received signal r(t) can be expressed as

$$r(t)=s_{i}(t)*h_{c}(t)+n(t),$$

where the "*" sign represents the convolution operation (see Appendix A) and is the noise process (see section 1.5.5).

In the reverse direction, the receiver front end and/or the demodulator performs frequency down-conversion of each bandpass signal. In preparation for detection, the demodulator restores the signal to an optimally shaped baseband pulse. Several filters are usually associated with the receiver and demodulator: filtering is performed to remove unwanted high-frequency components (during the conversion of the bandpass signal to baseband) and to shape the pulse. Equalization can be described as a kind of filtering used in the demodulator (or after it) to remove any degradation the signal may have suffered in the channel. Equalization is necessary when the channel impulse response is so poor that the received signal is severely distorted. An equalizer is implemented to compensate for (i.e., remove or attenuate) the distortion caused by the non-ideal channel response. Finally, the sampling step converts the shaped pulse into a sample for recovering (approximately) the channel symbol or message symbol (if channel coding is not used). Some authors use the terms "demodulation" and "detection" interchangeably. In this book, demodulation means the recovery of the waveform (the baseband pulse), and detection means the decision about the digital meaning of that waveform.

The remaining signal processing steps in the modem are optional and serve specific system needs. Source coding is the conversion of an analog signal to digital form (for analog sources) and the removal of redundant (unneeded) information. Note that a typical DCS may use either source coding (to digitize and compress the source information) or the simpler formatting transformation (to digitize only); a system cannot apply both at once, since source coding already includes the necessary digitization step. Encryption, used to ensure communication secrecy, prevents an unauthorized user from understanding messages or injecting false messages into the system. Channel coding can, at a given data rate, reduce the error probability P_E, or reduce the signal-to-noise ratio required to achieve a desired P_E, at the cost of increased transmission bandwidth or decoder complexity. Multiplexing and multiple-access procedures combine signals that may have different characteristics or come from different sources, so that they can share a portion of the communication resource (e.g., spectrum, time). Spectrum spreading can make a signal relatively immune to interference (both natural and intentional) and can increase the privacy of the communicating parties; it is also a valuable technique for multiple access.

The signal processing blocks in Fig. 1.2 represent a typical arrangement for a digital communication system, but the blocks are sometimes used in a slightly different order. For example, multiplexing may take place before channel coding or modulation, or, in a two-step modulation process (subcarrier and carrier), between the two modulation steps. Likewise, the spread-spectrum block may be located at various places in the upper row of Fig. 1.2; its exact location depends on the particular technique used. Synchronization and its key element, the clock signal, take part in all stages of signal processing in a DCS. For simplicity, the synchronization block in Fig. 1.2 is drawn without connecting lines, although in reality it participates in the regulation of operations in almost every block shown in the figure.

Fig. 1.3 shows the main signal processing functions (which can be regarded as signal transformations) divided into the following nine groups.

Fig.1.3. Major Digital Communications Transformations

1. Formatting and encoding the source

2. Narrowband signaling

3. Bandpass signaling

4. Equalization

5. Channel coding

6. Multiplexing and multiple access

7. Spread spectrum

8. Encryption

9. Synchronization

In Fig. 1.3 the narrowband (baseband) signaling block lists the binary alternatives used with PCM modulation or line codes. This block also indicates the non-binary category of signals called M-ary pulse modulation. Another transformation in Fig. 1.3, labeled bandpass signaling, is divided into two main blocks, coherent and non-coherent. Demodulation is usually performed with the help of reference signals. When known signals are used as a measure of all signal parameters (especially phase), the demodulation process is called coherent; when phase information is not used, the process is called non-coherent.

Channel coding deals with techniques used to improve digital signals, making them less vulnerable to such degradation factors as noise, fading, and jamming. In Fig. 1.3 channel coding is divided into two blocks, waveform coding and structured sequences. Waveform coding involves the use of new signals that provide better detection quality than the original ones. Structured sequences involve the use of additional bits for determining whether an error caused by noise in the channel has occurred. One such technique, automatic repeat request (ARQ), simply recognizes the occurrence of an error and asks the sender to retransmit the message; another, known as forward error correction (FEC), allows errors to be corrected automatically (with certain limitations). When considering structured sequences we shall discuss three common methods: block, convolutional, and turbo coding.

In digital communications, synchronization involves the estimation of both time and frequency. As shown in Fig. 1.3, synchronization is performed at five levels. The reference frequencies of coherent systems must be synchronized with the carrier (and possibly a subcarrier) in frequency and phase. For non-coherent systems phase synchronization is not needed. The basic time-synchronization process is symbol synchronization (or bit synchronization for binary symbols): the demodulator and detector must know when to start and end the symbol and bit detection process, and a synchronization error degrades detection performance. The next level of time synchronization, frame synchronization, allows the messages to be reconstructed. The last level, network synchronization, allows coordination with other users for efficient use of resources.

1.1.3. Basic digital communication terminology

The following are some of the main terms commonly used in the field of digital communications.

Information source. A device that produces the information to be transmitted by means of the DCS. The information source may be analog or discrete. The output of an analog source can take any value from a continuous range of amplitudes, whereas the output of a discrete information source takes values from a finite set of amplitudes. Analog information sources are converted into digital form through sampling and quantization. The sampling and quantization methods are called formatting and source coding (Fig. 1.3).

Text message. A sequence of characters (Fig. 1.4, a). In digital data transmission, a message is a sequence of digits or characters belonging to a finite character set, or alphabet.

Character. An element of an alphabet or character set (Fig. 1.4, b). Characters can be mapped into sequences of binary digits. Several standardized codes exist for character encoding, including ASCII (American Standard Code for Information Interchange), EBCDIC (Extended Binary Coded Decimal Interchange Code), the Hollerith code, the Baudot code, the Murray code, and the Morse code.

Fig. 1.4. Illustration of the terms: a) text message; b) characters; c) bit stream (7-bit ASCII code); d) symbols; e) bandpass digital signal

Binary digit (bit). The fundamental unit of information for all digital systems. The term "bit" is also used as a unit of information content, as described in Chapter 9.

Bit stream. A sequence of binary digits (zeros and ones). A bit stream is often called a baseband signal, implying that its spectral components range from (or near) DC up to some finite value, usually no more than a few megahertz. In Fig. 1.4, c the message "HOW" is represented using a seven-bit ASCII code, and the bit stream is shown in the form of two-level pulses. The pulse sequence is drawn as highly stylized (perfectly rectangular) waveforms with gaps between adjacent pulses. In a real system the pulses would never look like this, since such gaps serve no useful purpose. At a given data rate, gaps would increase the bandwidth required for transmission; or, for a given bandwidth, they would increase the delay required to receive the message.
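The bit stream of Fig. 1.4, c can be reproduced in a few lines of Python:

```python
# "HOW" as a 7-bit ASCII bit stream.
message = "HOW"
bitstream = [int(b) for ch in message for b in format(ord(ch), "07b")]
print(bitstream)
# H = 1001000, O = 1001111, W = 1010111  (21 bits in all)
```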

Symbol (digital message). A symbol is a group of k bits considered as a whole. We shall call this block a message symbol mᵢ (i = 1, …, M) from a finite symbol set, or alphabet (Fig. 1.4, d). The alphabet size M equals 2^k, where k is the number of bits in the symbol. In baseband transmission each symbol mᵢ is represented by one of a set of baseband pulse signals g₁(t), g₂(t), …, g_M(t). When transmitting a sequence of such pulses, the unit baud is sometimes used to express the pulse rate (symbol rate). For typical bandpass transmission each pulse gᵢ(t) is represented by one of a set of bandpass pulse signals s₁(t), s₂(t), …, s_M(t). Thus, for wireless systems a symbol is sent by transmitting the digital signal sᵢ(t) for T seconds. The next symbol is sent during the next time interval T. The fact that the symbol set transmitted by a DCS is finite is the principal difference between these systems and analog communication systems: the DCS receiver need only decide which of the M possible signals was transmitted, whereas an analog receiver must accurately estimate a value belonging to a continuous range of signals.

Digital waveform. A signal described by a voltage or current level (a pulse for baseband transmission, or a sinusoid for bandpass transmission) that represents a digital symbol. The characteristics of the signal (amplitude, duration, and position for pulses; amplitude, frequency, and phase for sinusoids) allow it to be identified as one of the symbols of the finite alphabet. Fig. 1.4, e shows an example of a bandpass digital signal. Although the waveform is sinusoidal and therefore has an analog appearance, it is still called digital because it encodes digital information. In this figure the digital value is indicated by the transmission, during each time interval T, of a signal of a particular frequency.

Data rate. This quantity, in bits per second (bps), is given by R = k/T = (1/T) log₂ M (bps), where k bits identify a symbol from the M = 2^k-symbol alphabet and T is the duration of a k-bit symbol.
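A worked example under assumed values: for an alphabet of M = 16 symbols (k = 4 bits per symbol) and a symbol duration T = 1 ms, the rate is R = 4/0.001 = 4000 bps.

```python
import math

# Data rate R = k/T for an assumed 16-symbol alphabet at 1 symbol per ms.
M, T = 16, 1e-3
k = math.log2(M)
R = k / T
print(k, R)   # 4.0 bits/symbol, 4000.0 bps
```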

1.1.4. Digital and analog performance benchmarks

The fundamental difference between analog and digital communication systems is connected with the way their performance is evaluated. The signals of analog systems take values on a continuum, so the receiver must deal with an infinite number of possible waveforms. The performance measure of analog communication systems is a fidelity criterion, such as the signal-to-noise ratio, percent distortion, or the expected root-mean-square error between the transmitted and received signals.

Unlike analog systems, digital communication systems transmit signals that represent digits. These digits form a finite set, or alphabet, and this set is known a priori to the receiver. The quality criterion for digital communication systems is the probability of incorrect detection of a digit, i.e. the error probability (P_E).

1.2. Signal classification

1.2.1. Deterministic and random signals

A signal can be classified as deterministic (when there is no uncertainty about its value at any point in time) or random otherwise. Deterministic signals are modeled by a mathematical expression. It is impossible to write such an expression for a random signal. However, when observing a random signal (also called a random process) for a sufficiently long period of time, some patterns can be noted that can be described in terms of probabilities and the statistical average. Such a model, in the form of a probabilistic description of a random process, is especially useful for describing the characteristics of signals and noise in communication systems.

1.2.2. Periodic and non-periodic signals

A signal x(t) is said to be periodic in time if there exists a constant T₀ > 0 such that

$$x(t)=x(t+T_{0})\qquad\text{for }-\infty<t<\infty,\qquad(1.2)$$

where t denotes time. The smallest value of T₀ satisfying this condition is called the period of the signal. The period determines the duration of one complete cycle of the function x(t). A signal for which there is no value of T₀ satisfying equation (1.2) is called non-periodic.

1.2.3. Analog and discrete signals

An analog signal is a continuous function of time, i.e. it is uniquely defined for all t. An electrical analog signal arises when a physical signal (such as speech) is converted into an electrical one by some device. By comparison, a discrete signal is a signal that exists at discrete moments of time; it is characterized by a sequence of numbers defined for each time point kT, where k is an integer and T is a fixed time interval.

1.2.4. Signals expressed in terms of energy or power

An electrical signal can be thought of as a voltage v(t) or a current i(t) delivering instantaneous power p(t) to a resistance R:

$$p(t)=\frac{v^{2}(t)}{R},\qquad(1.3a)$$
$$p(t)=i^{2}(t)\,R.\qquad(1.3b)$$

In communication systems, power is often normalized (the resistance R is assumed to be 1 Ohm, although in a real channel it can be anything). If the actual power value is required, it is obtained by "denormalizing" the normalized value. In the normalized case equations (1.3a) and (1.3b) have the same form. Therefore, regardless of whether the signal is represented by a voltage or a current, the normalized form allows the instantaneous power to be expressed as

$$p(t)=x^{2}(t),\qquad(1.4)$$

where x(t) is either a voltage or a current. The energy dissipated during the time interval (−T/2, T/2) by a real signal with the instantaneous power of equation (1.4) can be written as follows.

$$E_{x}^{T}=\int_{-T/2}^{T/2}x^{2}(t)\,dt.\qquad(1.5)$$

The average power dissipated by the signal during this interval is as follows.

$$P_{x}^{T}=\frac{1}{T}\int_{-T/2}^{T/2}x^{2}(t)\,dt.\qquad(1.6)$$

The performance of a communication system depends on the energy of the received signal: signals with higher energy are detected more reliably (with fewer errors) - it is the received energy that does the work of detection. Power, on the other hand, is the rate at which energy arrives. This point is important for several reasons: power determines the voltage that must be applied to the transmitter and the strengths of the electromagnetic fields that must be considered in radio systems (i.e., the fields in the waveguides connecting the transmitter to the antenna and the fields around the radiating elements of the antenna).

When analyzing communication signals it is often desirable to work with the signal energy. We call x(t) an energy signal if and only if it has non-zero, finite energy for all time (0 < E_x < ∞), where

$$E_{x}=\lim_{T\to\infty}\int_{-T/2}^{T/2}x^{2}(t)\,dt=\int_{-\infty}^{\infty}x^{2}(t)\,dt.\qquad(1.7)$$

In the real world we always transmit signals of finite energy (0 < E_x < ∞). However, to describe periodic signals, which by definition (equation (1.2)) exist for all time and therefore have infinite energy, and to work with random signals that also have unbounded energy, it is convenient to define a class of signals expressed in terms of power. It is convenient to represent a signal by its power if it is periodic and has non-zero, finite average power (0 < P_x < ∞) at all times, where

$$P_{x}=\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}x^{2}(t)\,dt.\qquad(1.8)$$

A given signal can be classified either as an energy signal or as a power signal. An energy signal has finite energy but zero average power, whereas a power signal has finite average power but infinite energy. A signal in a system can be expressed either in terms of its energy or in terms of its power. As a general rule, periodic signals and random signals are expressed in terms of power, while deterministic non-periodic signals are expressed in terms of energy.

Signal energy and power are two important parameters in describing a communication system. The classification of a signal as an energy signal or a power signal is a convenient model that facilitates the mathematical treatment of various signals and noises. Section 3.1.5 develops these ideas in the context of digital communication systems.
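The contrast between the two classes is easy to demonstrate numerically. A sketch with an assumed 1-second rectangular pulse and a unit square wave:

```python
import numpy as np

dt = 1e-4
t = np.arange(-50, 50, dt)

pulse = np.where(np.abs(t) <= 0.5, 1.0, 0.0)      # one 1-s pulse: energy signal
square = np.sign(np.sin(2*np.pi*t))               # periodic: power signal

E_pulse = np.sum(pulse**2) * dt                   # ~1: finite energy (1.7)
P_pulse = E_pulse / (t[-1] - t[0])                # ->0 as the window grows
P_square = np.sum(square**2) * dt / (t[-1]-t[0])  # ~1: finite power (1.8)
print(E_pulse, P_pulse, P_square)
```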

1.2.5. Unit impulse function

A useful function in communication theory is the unit impulse, or Dirac delta function δ(t). The impulse function is an abstraction: a pulse of infinite amplitude, zero width, and unit weight (area under the pulse), concentrated at the point where its argument is zero. The unit impulse is specified by the following relations.

$$\int_{-\infty}^{\infty}\delta(t)\,dt=1,\qquad(1.9)$$
$$\delta(t)=0\quad\text{for }t\neq0,\qquad(1.10)$$
$$\delta(t)\ \text{is unbounded at }t=0,\qquad(1.11)$$
$$\int_{-\infty}^{\infty}x(t)\,\delta(t-t_{0})\,dt=x(t_{0}).\qquad(1.12)$$

A unit impulse is not a function in the usual sense of the word. When it enters into any operation, it is convenient to treat it as a pulse of finite amplitude, unit area, and non-zero duration, and then to take the limit as the pulse duration tends to zero. Graphically it can be depicted as a spike located at the point t = t₀ whose height equals its integral, i.e. its area. Thus Aδ(t − t₀) with a constant A represents an impulse function whose area (or weight) is A and whose value is zero everywhere except at the point t = t₀.

Equation (1.12) is known as the sifting (or sampling) property of the unit impulse function; the integral of the unit impulse with an arbitrary function yields a sample of that function at the point t = t₀.
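The sifting property can be checked numerically by replacing δ(t − t₀) with a narrow unit-area pulse and letting its width shrink (an added illustration; the test function and t₀ are arbitrary choices):

```python
import numpy as np

def narrow_pulse(t, t0, eps):
    """Rectangular pulse of width eps and height 1/eps: unit area."""
    return np.where(np.abs(t - t0) <= eps/2, 1.0/eps, 0.0)

dt = 1e-5
t = np.arange(0, 2, dt)
x = np.sin(2*np.pi*t)
t0 = 0.25                                  # x(t0) = sin(pi/2) = 1

for eps in (0.1, 0.01, 0.001):
    val = np.sum(x * narrow_pulse(t, t0, eps)) * dt
    print(eps, val)                        # -> 1.0 as eps -> 0
```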

1.3. Spectral density

The spectral density of a signal characterizes the distribution of the signal's energy or power over a range of frequencies. This concept is particularly important when considering filtering in communication systems, where we need to be able to evaluate the signal and noise at the filter output. The energy spectral density (ESD) or the power spectral density (PSD) is used in such evaluations.

1.3.1. Spectral energy density

The total energy of a real energy signal x(t), defined on the interval (−∞, ∞), is described by equation (1.7). Using Parseval's theorem, we can relate the energy of such a signal expressed in the time domain to the energy expressed in the frequency domain:

$$E_{x}=\int_{-\infty}^{\infty}x^{2}(t)\,dt=\int_{-\infty}^{\infty}|X(f)|^{2}\,df,\qquad(1.13)$$

where X(f) is the Fourier transform of the non-periodic signal x(t). (A summary of Fourier analysis can be found in Appendix A.) Let ψ_x(f) denote the squared amplitude spectrum, defined as

$$\psi_{x}(f)=|X(f)|^{2}.\qquad(1.14)$$

The quantity ψ_x(f) is the energy spectral density (ESD) of the signal x(t). Therefore, from equation (1.13), the total energy can be expressed by integrating the spectral density over frequency:

$$E_{x}=\int_{-\infty}^{\infty}\psi_{x}(f)\,df.\qquad(1.15)$$

This equation shows that the energy of the signal equals the area under the graph of ψ_x(f) in the frequency domain. The energy spectral density describes the signal energy per unit bandwidth and is measured in J/Hz. The positive and negative frequency components make equal energy contributions, so, for a real signal, ψ_x(f) is an even function of frequency. Therefore the energy spectral density is symmetric in frequency about the origin, and the total signal energy can be expressed as

$$E_{x}=2\int_{0}^{\infty}\psi_{x}(f)\,df.\qquad(1.16)$$
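Parseval's relation (1.13) can be verified numerically. A sketch using a Gaussian pulse, a convenient finite-energy example:

```python
import numpy as np

dt = 1e-3
t = np.arange(-10, 10, dt)
x = np.exp(-t**2)                     # finite-energy pulse

E_time = np.sum(x**2) * dt            # energy in the time domain
X = np.fft.fft(x) * dt                # approximates the continuous X(f)
df = 1.0 / (len(x) * dt)
E_freq = np.sum(np.abs(X)**2) * df    # energy as the area under |X(f)|^2

print(E_time, E_freq)                 # both ~ sqrt(pi/2) ~ 1.2533
```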

1.3.2. Power Spectral Density

The average power P_x of a real power signal is defined by equation (1.8). If x(t) is a periodic signal with period T₀, it is classified as a power signal. The expression for the average power of a periodic signal is given by formula (1.6), with the time average taken over one period:

$$P_{x}=\frac{1}{T_{0}}\int_{-T_{0}/2}^{T_{0}/2}x^{2}(t)\,dt.\qquad(1.17a)$$

Parseval's theorem for a real periodic signal has the form

$$P_{x}=\frac{1}{T_{0}}\int_{-T_{0}/2}^{T_{0}/2}x^{2}(t)\,dt=\sum_{n=-\infty}^{\infty}|c_{n}|^{2},\qquad(1.17b)$$

where the c_n are the complex coefficients of the Fourier series of the periodic signal (see Appendix A).

To use equation (1.17b), one needs to know only the magnitudes of the coefficients c_n. The power spectral density (PSD) G_x(f) of a periodic signal, a real, even, and non-negative function of frequency that gives the distribution of the signal power over frequency, is defined as follows.

$$G_{x}(f)=\sum_{n=-\infty}^{\infty}|c_{n}|^{2}\,\delta(f-nf_{0}).\qquad(1.18)$$

Equation (1.18) defines the power spectral density of a periodic signal as a sequence of weighted delta functions. Therefore, the PSD of a periodic signal is a discrete function of frequency. Using the PSD defined in equation (1.18), one can write the average normalized power of the real signal.

$$P_{x}=\int_{-\infty}^{\infty}G_{x}(f)\,df=2\int_{0}^{\infty}G_{x}(f)\,df.\qquad(1.19)$$

Equation (1.18) describes the PSD of periodic signals only. If x(t) is a non-periodic signal, it cannot be expressed by a Fourier series; if it is a non-periodic power signal (having infinite energy), it may not have a Fourier transform. However, we can still express the power spectral density of such signals in a limiting sense. If we form a truncated version x_T(t) of the non-periodic power signal x(t) by taking only its values on the interval (−T/2, T/2), then x_T(t) has finite energy and a Fourier transform X_T(f). It can be shown that the power spectral density of the non-periodic x(t) is defined by the limit

$$G_{x}(f)=\lim_{T\to\infty}\frac{1}{T}\,|X_{T}(f)|^{2}.\qquad(1.20)$$
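A sketch of this limiting definition for unit-variance white noise, whose PSD should be flat at level 1 (the noise model and window sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1.0
for N in (1_000, 10_000, 100_000):
    x = rng.standard_normal(N)          # one realization, window T = N/fs
    XT = np.fft.rfft(x) / fs
    psd = np.abs(XT)**2 / (N / fs)      # |X_T(f)|^2 / T
    print(N, psd[1:].mean())            # -> 1.0, the flat PSD level
```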

Example 1.1. Average normalized power

a) Find the average normalized power of the signal using time averaging.

b) Repeat part a) by summing the spectral coefficients.

Solution

a) Using equation (1.17a), we have the following.

b) Using equations (1.18) and (1.19), we obtain the following.

(see Appendix A)

1.4. Autocorrelation

1.4.1. Energy Signal Autocorrelation

Correlation is a matching process; autocorrelation is the matching of a signal with a delayed version of itself. The autocorrelation function of a real energy signal x(t) is defined as

$$R_{x}(\tau)=\int_{-\infty}^{\infty}x(t)\,x(t+\tau)\,dt\qquad\text{for }-\infty<\tau<\infty.\qquad(1.21)$$

The autocorrelation function R_x(τ) gives a measure of the similarity of the signal to its own copy shifted by τ units of time. The variable τ plays the role of a scanning or search parameter. R_x(τ) is not a function of time; it is a function only of the time difference τ between the signal and its shifted copy.

The autocorrelation function of a real energy signal has the following properties.

1. R_x(τ) = R_x(−τ): symmetry about zero.

2. R_x(τ) ≤ R_x(0) for all τ: the maximum value is at zero.

3. R_x(τ) ↔ ψ_x(f): autocorrelation and ESD are Fourier transforms of each other, as indicated by the double-headed arrow.

4. R_x(0) = ∫ x²(t) dt: the value at zero is equal to the signal energy.

A function satisfying conditions 1-3 is an autocorrelation function. Condition 4 is a consequence of condition 3, so it need not be included in the basic set when testing for an autocorrelation function.

1.4.2. Autocorrelation of a Periodic Signal

The autocorrelation of a real periodic (power) signal x(t) is defined as

$$R_{x}(\tau)=\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}x(t)\,x(t+\tau)\,dt\qquad\text{for }-\infty<\tau<\infty.\qquad(1.22)$$

If the signal is periodic with period T₀, the time average in equation (1.22) can be taken over one period, and the autocorrelation can be expressed as

$$R_{x}(\tau)=\frac{1}{T_{0}}\int_{-T_{0}/2}^{T_{0}/2}x(t)\,x(t+\tau)\,dt\qquad\text{for }-\infty<\tau<\infty.\qquad(1.23)$$

The autocorrelation of a real-valued periodic signal has properties similar to those of an energy signal.

1. R_x(τ) = R_x(−τ): symmetry about zero.

2. R_x(τ) ≤ R_x(0) for all τ: the maximum value is at zero.

3. R_x(τ) ↔ G_x(f): autocorrelation and PSD are Fourier transforms of each other.

4. R_x(0) = (1/T₀) ∫_{−T₀/2}^{T₀/2} x²(t) dt: the value at zero is equal to the average power of the signal.

1.5. Random signals

The main task of a communication system is to transmit information over a communication channel. All useful message signals appear randomly, i.e. the receiver does not know in advance which of the possible message characters will be transmitted. In addition, due to various electrical processes, noise occurs that accompanies information signals. Therefore, we need an efficient way to describe random signals.

1.5.1. Random variables

Let the random variable X(A) represent the functional relationship between a random event A and a real number. For notational convenience we denote the random variable by X, leaving its functional dependence on A implicit. A random variable may be discrete or continuous. The distribution function F_X(x) of the random variable X is

$$F_{X}(x)=P(X\le x),\qquad(1.24)$$

where P(X ≤ x) is the probability that the value taken by the random variable X is less than or equal to the real number x. The distribution function has the following properties.

1. 0 ≤ F_X(x) ≤ 1

2. F_X(x₁) ≤ F_X(x₂) if x₁ ≤ x₂

3. F_X(−∞) = 0, F_X(+∞) = 1

Another useful function related to the random variable X is the probability density, written as

$$p_{X}(x)=\frac{dF_{X}(x)}{dx}.\qquad(1.25a)$$

As with the distribution function, the probability density is a function of the real number x. The name "density function" comes from the fact that the probability of the event x₁ ≤ X ≤ x₂ equals

$$P(x_{1}\le X\le x_{2})=F_{X}(x_{2})-F_{X}(x_{1})=\int_{x_{1}}^{x_{2}}p_{X}(x)\,dx.\qquad(1.25b)$$

Using equation (1.25b), we can approximate the probability that the random variable X takes a value in a very small interval between x and x + Δx:

$$P(x\le X\le x+\Delta x)\approx p_{X}(x)\,\Delta x.$$

Thus, in the limit as Δx tends to zero, we can write

$$P(x\le X\le x+dx)=p_{X}(x)\,dx.$$

The probability density has the following properties.

1. p_X(x) ≥ 0.

2. ∫_{−∞}^{∞} p_X(x) dx = 1.

Thus the probability density is always non-negative and has unit area. In this book we use the notation p_X(x) for the probability density of a continuous random variable. For convenience of notation we often omit the index X and write simply p(x). If the random variable X can take only discrete values, we use the notation P(X = xᵢ).

1.5.1.1. Ensemble mean

The mean value, or expected value, of a random variable X is defined by the expression

$$m_{X}=E\{X\}=\int_{-\infty}^{\infty}x\,p_{X}(x)\,dx,\qquad(1.26)$$

where E{·} is called the expected value operator. The n-th moment of the probability distribution of the random variable X is the quantity

$$E\{X^{n}\}=\int_{-\infty}^{\infty}x^{n}\,p_{X}(x)\,dx.\qquad(1.27)$$

For the analysis of communication systems, the first two moments of the variable X are the important ones. Thus, for n = 1 equation (1.27) gives the mean considered above, and for n = 2 it gives the mean-square value of X:

$$E\{X^{2}\}=\int_{-\infty}^{\infty}x^{2}\,p_{X}(x)\,dx.\qquad(1.28)$$

One can also define central moments, which are the moments of the difference between X and m_X. The second central moment, called the variance, is

$$\operatorname{var}(X)=E\{(X-m_{X})^{2}\}=\int_{-\infty}^{\infty}(x-m_{X})^{2}\,p_{X}(x)\,dx.\qquad(1.29)$$

The variance of X is also written σ_X², and the square root of this quantity, σ_X, is called the standard deviation of X. The variance is a measure of the "spread" of the random variable X; specifying the variance of a random variable constrains the width of its probability density function. The variance and the mean-square value are related by

$$\sigma_{X}^{2}=E\{X^{2}\}-m_{X}^{2}.$$

Thus the variance equals the difference between the mean-square value and the square of the mean.
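This identity is easy to confirm on sampled data (a sketch using a uniform distribution, an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 2, 1_000_000)       # uniform on [0, 2]

mean = x.mean()
mean_square = (x**2).mean()
print(mean_square - mean**2, x.var())  # both ~ (2-0)^2 / 12 = 0.3333
```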

1.5.2. Random processes

A random process can be viewed as a function of two variables: an event A and time. Fig. 1.5 shows an example of a random process: N sample functions of time are shown. Each of the sample functions can be regarded as the output of a separate noise generator. For each event A_j we have a single time function X(t, A_j) (i.e., a sample function). The set of all sample functions is called an ensemble. At any given instant t_k, X(t_k) is a random variable whose value depends on the event. Finally, for a specific event A_j and a specific instant t_k, X(t_k, A_j) is an ordinary number. For notational convenience we denote the random process by X(t), leaving the functional dependence on A implicit.

Fig.1.5. Random Noise Process

1.5.2.1. Statistical mean of a random process

Since the value of a random process at each future instant is unknown, a random process whose distribution functions are continuous can be described statistically through its probability density. In general, this density has a different form at different times. In most cases it is impractical to determine the probability distribution of a random process empirically. At the same time, for the needs of communication systems a partial description consisting of the mean and the autocorrelation function is often sufficient. Thus we define the mean of the random process X(t) as

$$m_{X}(t_{k})=E\{X(t_{k})\}=\int_{-\infty}^{\infty}x\,p_{X_{t_{k}}}(x)\,dx,\qquad(1.30)$$

where X(t_k) is the random variable obtained by observing the random process at time t_k, and p_{X_{t_k}}(x) is its probability density (the density over the ensemble of events at time t_k).

Let us define the autocorrelation function of the random process X(t) as a function of the two variables t₁ and t₂:

$$R_{X}(t_{1},t_{2})=E\{X(t_{1})X(t_{2})\},\qquad(1.31)$$

where X(t₁) and X(t₂) are the random variables obtained by observing X(t) at the times t₁ and t₂, respectively. The autocorrelation function is a measure of the relationship between two time samples of a single random process.

1.5.2.2. Stationarity

A random process X(t) is called stationary in the strict sense if none of its statistics is affected by a shift of the time origin. A random process is called stationary in the broad sense if two of its statistics, the mean and the autocorrelation function, do not change under a shift of the time origin. Thus a process is stationary in the broad sense if

$$E\{X(t)\}=m_{X}=\text{const},\qquad(1.32)$$
$$R_{X}(t_{1},t_{2})=R_{X}(t_{1}-t_{2}).\qquad(1.33)$$

Stationarity in the strict sense implies stationarity in the broad sense, but not vice versa. Most of the useful results of communication theory are based on the assumption that random information signals and noise are stationary in a broad sense. From a practical point of view, a random process does not always have to be stationary, it is enough to be stationary in some observable time interval of practical interest.

For stationary processes, the autocorrelation function in equation (1.33) does not depend on time but only on the difference τ = t₁ − t₂. In other words, all pairs of values of X(t) at times separated by the interval τ have the same correlation value. Therefore, for stationary processes, the function can be written simply as R_X(τ).

1.5.2.3. Autocorrelation of random processes stationary in the broad sense

Just as the variance offers a measure of randomness for random variables, the autocorrelation function offers a similar measure for random processes. For processes stationary in the broad sense, the autocorrelation function depends only on the time difference τ:

$$R_{X}(\tau)=E\{X(t)X(t+\tau)\}.$$

For a broadly stationary process with zero mean, the function R_X(τ) shows how statistically correlated the random variables of the process separated by τ seconds are. In other words, it gives information about the frequency content of the random process. If R_X(τ) changes slowly as τ increases from zero, this shows that, on average, sample values of X(t) taken at the times t and t + τ are nearly equal; we may therefore expect low frequencies to dominate in the frequency representation of X(t). If, on the contrary, R_X(τ) decreases rapidly with increasing τ, one should expect X(t) to change rapidly in time and hence to contain predominantly high frequencies.

The autocorrelation function of a real-valued process stationary in the broad sense has the following properties.

1. R_X(τ) = R_X(−τ): symmetry about zero.

2. R_X(τ) ≤ R_X(0) for all τ: the maximum value is at zero.

3. R_X(τ) ↔ G_X(f): autocorrelation and power spectral density are Fourier transforms of each other.

4. R_X(0) = E{X²(t)}: the value at zero is equal to the average power of the signal.

1.5.3. Time Averaging and Ergodicity

To compute m_X and R_X(τ) by ensemble averaging, we would have to average over all sample functions of the process and would therefore need complete information about the first- and second-order joint probability density functions. In general, such information is not available.

If a random process belongs to a special class called the class of ergodic processes, its time average is equal to the ensemble average, and the statistical properties of the process can be determined by averaging over time one sample function of the process. For a random process to be ergodic, it must be stationary in the strict sense (the reverse is not necessary). However, for communication systems, where stationarity in a broad sense is sufficient for us, we are only interested in the mean and the autocorrelation function.

A random process is said to be ergodic in the mean if

$$m_{X}=\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}x(t)\,dt,\qquad(1.35)$$

and ergodic in the autocorrelation function if

$$R_{X}(\tau)=\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}x(t)\,x(t+\tau)\,dt.\qquad(1.36)$$

Testing a random process for ergodicity is usually quite difficult. In practice, as a rule, an intuitive assumption is used about the expediency of replacing ensemble averages with time averages. When analyzing most signals in communication channels (in the absence of impulse effects), it is reasonable to assume that random signals are ergodic with respect to the autocorrelation function. Since for ergodic processes the time averages are equal to the ensemble averages, fundamental electrical parameters, such as the amplitude of the DC component, the root mean square value and the average power, can be associated with the moments of the ergodic random process.

1. The quantity m_X = E{X(t)} is equal to the DC level of the signal.

2. The quantity m_X² is equal to the normalized power of the DC component.

3. The second moment of X(t), E{X²(t)}, is equal to the total average normalized power.

4. The quantity √E{X²(t)} is equal to the root-mean-square (rms) value of the signal expressed as a current or voltage.

5. The variance σ_X² is equal to the average normalized power of the alternating (time-varying) component of the signal.

6. If the process mean is zero (i.e., m_X = 0), then σ_X² = E{X²(t)}, and the variance equals the mean-square value; in other words, the variance represents the total power in the normalized load.

7. The standard deviation σ_X is the rms value of the alternating component of the signal.

8. If m_X = 0, then σ_X is the rms value of the signal itself.
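The equality of time and ensemble averages stated in (1.35) can be sketched numerically (Gaussian noise with an assumed DC offset of 0.7; both averages recover the DC component of item 1):

```python
import numpy as np

rng = np.random.default_rng(4)
dc, n_real, n_time = 0.7, 500, 5_000

# ensemble of realizations of an ergodic process: noise + DC offset
ensemble = dc + rng.standard_normal((n_real, n_time))

time_avg = ensemble[0].mean()    # average one realization over time
ens_avg = ensemble[:, 0].mean()  # average all realizations at one instant
print(time_avg, ens_avg)         # both ~ 0.7, the DC component
```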

1.5.4. Power Spectral Density and Autocorrelation of a Stochastic Process

A random process X(t) can generally be classified as a power signal having a power spectral density G_X(f) of the form given in equation (1.20). The function G_X(f) is especially useful in communication systems because it describes the distribution of the signal's power over frequency. The power spectral density allows one to estimate how much signal power will pass through a network with known frequency characteristics. The main properties of power spectral density functions can be stated as follows.

1. G_X(f) ≥ 0 and always takes real values.

2. G_X(f) = G_X(−f) for X(t) taking real values.

3. G_X(f) ↔ R_X(τ): power spectral density and autocorrelation are Fourier transforms of each other.

4. P_X = ∫_{−∞}^{∞} G_X(f) df = R_X(0): the relationship between the average normalized power and the power spectral density.

Fig. 1.6 gives a visual representation of the autocorrelation function and the power spectral density function. What does the term "correlation" mean? When we are interested in the correlation of two phenomena, we ask how closely they correspond in behavior or appearance, how well they match. In mathematics, the autocorrelation function of a signal (in the time domain) describes the correspondence of the signal to itself shifted by some amount of time. Imagine that an exact copy of the signal is created and localized at minus infinity. We then move the copy step by step in the positive direction of the time axis and ask, at each step, how well the original and the copy match; then we move the copy one more step and ask again, and so on. The correlation between the two signals is depicted as a function of the time shift, denoted R_X(τ); the shift τ can be regarded as a scanning parameter.

Fig. 1.6(a)-(d) depict the situation described above at several points in time. Fig. 1.6(a) illustrates a single sample function of a wide-sense stationary random process X(t). The signal is a random binary sequence with positive and negative (bipolar) pulses of unit amplitude. Positive and negative pulses appear with equal probability. The duration of each pulse (binary digit) is T seconds, and the mean, or DC component, of the random sequence is zero. Fig. 1.6(b) shows the same sequence shifted in time by τ seconds; according to the accepted notation, this sequence is denoted X(t + τ). Let us assume that the process X(t) is ergodic with respect to the autocorrelation function, so that time averaging can be used instead of ensemble averaging to find R_X(τ). The value of R_X(τ) is obtained by multiplying the two sequences X(t) and X(t + τ) and finding the average using equation (1.36), which is exact for ergodic processes only in the limit; integration over an integer number of periods, however, gives an estimate of R_X(τ). Note that R_X(τ) can be obtained by shifting X(t) in either the positive or the negative direction. A similar case is illustrated in Fig. 1.6(c), which uses the original sample sequence (Fig. 1.6(a)) and its shifted copy (Fig. 1.6(b)). The shaded areas under the product curve contribute positively to the integral, while the gray areas contribute negatively. Integrating the product over the pulse-transmission time gives one point on the R_X(τ) curve. The sequence can be shifted further, and each such shift yields a point on the overall autocorrelation function R_X(τ) shown in Fig. 1.6(d); in other words, each shift of the random sequence of bipolar pulses contributes one point to the general curve of Fig. 1.6(d). The maximum of the function occurs at τ = 0 (the best match, since |R_X(τ)| ≤ R_X(0) for all τ), and the function falls off as τ grows. Fig. 1.6(d) marks the points corresponding to τ = 0 and to the shift considered above.

The analytical expression for the autocorrelation function R_X(τ) shown in Fig. 1.6(d) has the following form.

$$R_X(\tau) = \begin{cases} 1 - \dfrac{|\tau|}{T}, & |\tau| \le T \\[4pt] 0, & |\tau| > T \end{cases} \qquad (1.37)$$
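As a cross-check of equation (1.37), the following sketch builds a random bipolar pulse train and estimates its autocorrelation by time averaging; the sampling grid and sequence length are assumptions of the example. The estimate closely follows the triangle of (1.37):

```python
# Time-average autocorrelation of a random bipolar pulse train versus
# the triangular function of equation (1.37); parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
T, ns = 1.0, 50                         # pulse duration (s) and samples per pulse
bits = rng.choice([-1.0, 1.0], size=5000)
x = np.repeat(bits, ns)                 # bipolar waveform sampled at ns/T Hz

lags = np.arange(0, 2 * ns + 1)
R_est = np.array([np.dot(x[:x.size - k], x[k:]) / (x.size - k) for k in lags])
tau = lags * (T / ns)
R_theory = np.where(np.abs(tau) <= T, 1.0 - np.abs(tau) / T, 0.0)
print("max deviation from the triangle:", np.round(np.max(np.abs(R_est - R_theory)), 3))
```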

Note that the autocorrelation function gives us frequency information: it tells us something about the bandwidth of the signal. At the same time, autocorrelation is a function of time; there are no frequency-dependent terms in formula (1.37). How, then, does it convey bandwidth information?

Fig.1.6. Autocorrelation and power spectral density

Fig.1.6. Autocorrelation and power spectral density (continued)

Assume that the signal is changing very slowly (the signal has a narrow bandwidth). If we shift the copy of the signal along the axis, asking at each stage of the shift how well the copy and the original match, the correspondence will remain strong for a long time; in other words, the triangular autocorrelation function (Fig. 1.6(d) and formula (1.37)) will decrease slowly with increasing τ. Now assume that the signal changes quickly (i.e., we have a large bandwidth). In this case even a small shift τ will drive the correlation to zero, and the autocorrelation function will have a very narrow shape. Therefore, comparing autocorrelation functions by shape gives some information about the bandwidth of the signal. Does the function decrease gradually? Then we have a narrowband signal. Does its shape resemble a narrow peak? Then the signal is wideband.

The autocorrelation function allows the power spectral density of a random signal to be expressed explicitly. Since the power spectral density and the autocorrelation function are Fourier transforms of each other, the power spectral density G_X(f) of the random sequence of bipolar pulses can be found as the Fourier transform of the function R_X(τ), whose analytical expression is given in equation (1.37). Table A.1 can be used for this. Note that

$$G_X(f) = T \left( \frac{\sin \pi f T}{\pi f T} \right)^{2} = T\,\mathrm{sinc}^{2}(fT) \qquad (1.38)$$

The general view of the function G_X(f) is shown in Fig. 1.6(e).

Note that the area under the power spectral density curve represents the average signal power. One convenient measure of bandwidth is the width of the main spectral lobe (see Section 1.7.2). Fig. 1.6(e) shows that the bandwidth of the signal is related to the reciprocal of the symbol duration, or pulse width. Figs. 1.6(f)-(j) formally repeat Figs. 1.6(a)-(e), except that the pulse duration in them is shorter. Note that for shorter pulses the function R_X(τ) is narrower (Fig. 1.6(i)) than for longer ones (Fig. 1.6(d)): as Fig. 1.6(i) shows, in the case of a shorter pulse duration a shift of τ = T is sufficient to produce zero match, i.e., a complete loss of correlation between the shifted sequences. Since the pulse duration T in Fig. 1.6(f) is shorter (the pulse rate is higher) than in Fig. 1.6(a), the band occupancy in Fig. 1.6(j) is greater than the band occupancy for the lower pulse rate shown in Fig. 1.6(e).

1.5.5. Noise in communication systems

The term "noise" refers to unwanted electrical signals that are always present in electrical systems. The presence of noise superimposed on the signal "obscures", or masks, the signal; this limits the receiver's ability to make accurate decisions about the meaning of the symbols, and therefore limits the information rate. The nature of noise is varied and includes both natural and artificial sources. Man-made noise is spark ignition noise, switching impulse noise and noise from other related sources of electromagnetic radiation. Natural noises come from the atmosphere, the sun, and other galactic sources.

Good engineering design can eliminate most noise or its unwanted effects through filtering, screening, modulation selection, and optimal receiver location. For example, sensitive radio astronomy measurements are usually carried out in remote desert areas, far from natural sources of noise. However, there is one natural noise, called thermal noise, which cannot be eliminated. Thermal noise is caused by the thermal motion of electrons in all dissipative components - resistors, conductors, etc. The same electrons that are responsible for electrical conductivity are also responsible for thermal noise.

Thermal noise can be described as a Gaussian random process with zero mean. A Gaussian process n(t) is a random function whose value n at an arbitrary time t is statistically characterized by the Gaussian probability density function:

$$p(n) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{n}{\sigma} \right)^{2} \right], \qquad (1.40)$$

where σ² is the variance of n. The normalized Gaussian density function of a zero-mean process is obtained under the assumption that σ = 1. The normalized probability density function is shown schematically in Fig. 1.7.

Suppose z = a + n is a random signal, where a is the signal in the communication channel and n is the zero-mean Gaussian noise random variable. The probability density function of z is then expressed as

$$p(z) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{z - a}{\sigma} \right)^{2} \right], \qquad (1.41)$$

where, as above, σ² is the variance of n.

Fig.1.7. Normalized (σ = 1) Gaussian probability density function

The Gaussian distribution is often used as a model of system noise because of the central limit theorem, which states that, under very general conditions, the probability distribution of the sum of j statistically independent random variables approaches the Gaussian distribution as j grows, whatever the individual distribution functions may be. Thus, even though individual noise mechanisms may have non-Gaussian distributions, the aggregate of many such mechanisms tends toward a Gaussian distribution.
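A quick numeric sketch of the central limit theorem (the uniform distribution and sample sizes are assumptions chosen for the illustration): sums of j independent, clearly non-Gaussian variables rapidly approach a Gaussian, which can be seen from the excess kurtosis tending to zero:

```python
# Central limit theorem: the excess kurtosis of a Gaussian is 0; for the
# sum of j independent uniform variables it decays as -1.2/j.
import numpy as np

rng = np.random.default_rng(3)
for j in (1, 2, 10, 50):
    s = rng.uniform(-1.0, 1.0, size=(200_000, j)).sum(axis=1)
    s = (s - s.mean()) / s.std()            # zero mean, unit variance
    kurt = np.mean(s ** 4) - 3.0            # excess kurtosis estimate
    print(f"j = {j:2d}: excess kurtosis = {kurt:+.3f}")
```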

1.5.5.1. White noise

The main spectral characteristic of thermal noise is that its power spectral density is the same for all frequencies of interest in most communication systems; in other words, a thermal noise source radiates at all frequencies with equal power per unit bandwidth, from DC up to frequencies of the order of 10¹² Hz. Therefore, a simple model of thermal noise assumes that its power spectral density is uniform for all frequencies, as shown in Fig. 1.8(a), and is written in the following form.

$$G_n(f) = \frac{N_0}{2} \ \text{W/Hz} \qquad (1.42)$$

Here the factor of 2 is included to show that G_n(f) is a two-sided power spectral density. When the noise power has such a uniform spectral density, we call the noise white. The adjective "white" is used in the same sense as for white light, which contains equal amounts of all frequencies of the visible electromagnetic spectrum.

Fig.1.8. White noise: a) power spectral density;

b) autocorrelation function

The white noise autocorrelation function is given by the inverse Fourier transform of the noise power spectral density (see Table A.1) and is written as follows.

$$R_n(\tau) = \frac{N_0}{2}\,\delta(\tau) \qquad (1.43)$$

Thus, the autocorrelation of white noise is a delta function weighted by the factor N_0/2 and located at τ = 0, as shown in Fig. 1.8(b). Note that R_n(τ) is equal to zero for τ ≠ 0, i.e., two different samples of white noise are uncorrelated, no matter how close together they are.

The average white noise power is infinite because the white noise bandwidth is infinite. This can be seen by obtaining the following expression from equations (1.19) and (1.42).

$$P_n = \int_{-\infty}^{\infty} \frac{N_0}{2}\,df = \infty \qquad (1.44)$$

Although white noise is a very useful abstraction, no noise process can actually be white; however, the noise appearing in many real systems can be assumed to be approximately white. We can observe such noise only after it has passed through a real system of finite bandwidth. Therefore, as long as the bandwidth of the noise is substantially greater than the bandwidth used by the system, the noise can be considered to have infinite bandwidth.

The delta function in equation (1.43) means that the noise signal n(t) is completely uncorrelated with its own shifted version for any shift τ ≠ 0. Equation (1.43) shows that any two samples of a white noise process are uncorrelated. Since thermal noise is a Gaussian process and its samples are uncorrelated, the noise samples are also independent. Thus, the effect of a channel with additive white Gaussian noise on the detection process is that the noise affects each transmitted symbol independently. Such a channel is called a memoryless channel. The term "additive" means that the noise is simply superimposed on, or added to, the signal; no multiplicative mechanisms exist.
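The uncorrelatedness of white-noise samples is easy to observe numerically. A minimal sketch (the discrete-time approximation and sample count are assumptions of the example):

```python
# Sample autocorrelation of discrete white Gaussian noise: a spike at lag 0
# (the variance) and values near zero at all other lags, cf. Fig. 1.8(b).
import numpy as np

rng = np.random.default_rng(4)
n = rng.normal(0.0, 1.0, 100_000)            # zero-mean, unit-variance noise
for lag in (0, 1, 5, 50):
    R = np.dot(n[:n.size - lag], n[lag:]) / (n.size - lag)
    print(f"R({lag:2d}) = {R:+.4f}")         # ~1.0 at lag 0, ~0.0 elsewhere
```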

Because thermal noise is present in all communications systems and is a significant source of noise for most systems, thermal noise characteristics (additive, white, and Gaussian) are often used to model noise in communications systems. Because zero-mean Gaussian noise is fully characterized by its variance, this model is particularly easy to use in signal detection and optimal receiver design. In this book, we will assume (unless otherwise stated) that the system is corrupted by additive white Gaussian noise with zero mean, although sometimes this simplification will be overly strong.

1.6. Signal transmission through linear systems

Now that we have developed a set of models for signals and noise, let us consider the characteristics of systems and their effect on signals and noise. Since a system can be characterized equally well in the frequency or the time domain, methods have been developed in both domains for analyzing the response of a linear system to an arbitrary input signal. The signal applied to the input of the system (Fig. 1.9) can be described either as a time signal x(t) or through its Fourier transform X(f). Time-domain analysis yields the time output y(t), and in the process the impulse response h(t) of the network is determined. When the input is considered in the frequency domain, we determine the frequency transfer function H(f) of the system, which determines the frequency output Y(f). The system is assumed to be linear and time-invariant. It is also assumed that the system has no stored energy at the moment the input signal is applied.

Fig.1.9. Linear system and its key parameters

1.6.1. Impulse response

The linear, time-invariant system or network shown in Fig. 1.9 is described in the time domain by the impulse response h(t), which is the response of the system when a unit impulse is applied to its input.

Consider the term "impulse response", which is extremely apt for this event. Describing the characteristics of a system through its impulse response has a direct physical interpretation. At the input of the system we apply a unit impulse (an idealized signal of infinite amplitude, zero width and unit area), as shown in Fig. 1.10(a). Applying such an impulse to the system can be thought of as giving it an "instantaneous kick". How will the system react ("respond") to such an applied force (impulse)? The output signal h(t) is the impulse response of the system. (A possible form of this response is shown in Fig. 1.10(b).)

The response of the network to an arbitrary input signal x(t) is the convolution of x(t) with h(t), written as follows.

$$y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t - \tau)\,d\tau \qquad (1.46)$$

Fig.1.10. Illustration of the concept of "impulse response": a) the input signal is a unit impulse function; b) the output signal is the impulse response of the system

Here, the "*" sign denotes a convolution operation (see clause A.5). The system is assumed to be causal, which means that there is no signal at the output until the time when the signal is applied to the input. Therefore, the lower bound of integration can be taken equal to zero, and the output can be expressed in a slightly different way.

$$y(t) = \int_{-\infty}^{t} x(\tau)\,h(t - \tau)\,d\tau \qquad (1.47a)$$

or in the form

$$y(t) = \int_{0}^{\infty} h(\tau)\,x(t - \tau)\,d\tau \qquad (1.47b)$$

The expressions in equations (1.46) and (1.47) are called convolution integrals. Convolution is a fundamental mathematical tool that plays an important role in understanding all communication systems. A reader unfamiliar with this operation should consult Section A.5 for the derivation of equations (1.46) and (1.47).
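The convolution integral (1.46) is directly mirrored by discrete convolution. A minimal sketch (the rectangular input and the exponential impulse response are assumptions chosen for the illustration):

```python
# Discrete approximation of y(t) = (x * h)(t), equation (1.46).
import numpy as np

fs = 1000.0                            # samples per second (assumed)
dt = 1.0 / fs
t = np.arange(0.0, 1.0, dt)
x = (t < 0.2).astype(float)            # rectangular input pulse, 0.2 s wide
tau0 = 0.05                            # time constant of the assumed network
h = (1.0 / tau0) * np.exp(-t / tau0)   # causal, unit-area impulse response
y = np.convolve(x, h)[:t.size] * dt    # Riemann-sum approximation of the integral
print(f"peak output = {y.max():.3f} (analytic 1 - e^-4 = {1 - np.exp(-4):.3f})")
```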

1.6.2. Frequency transfer function

The frequency output Y(f) is obtained by applying the Fourier transform to both sides of equation (1.46). Since convolution in the time domain becomes multiplication in the frequency domain (and vice versa), equation (1.46) yields the following.

$$Y(f) = H(f)\,X(f) \qquad (1.48)$$

$$H(f) = \frac{Y(f)}{X(f)} \qquad (1.49)$$

(It is assumed, of course, that X(f) ≠ 0 for all f.) Here H(f), the Fourier transform of the impulse response h(t), is called the frequency transfer function, the frequency response, or the frequency characteristic of the network. In general, H(f) is complex and can be written as

$$H(f) = |H(f)|\,e^{j\theta(f)}, \qquad (1.50)$$

where |H(f)| is the modulus of the response. The phase of the response is defined as follows.

$$\theta(f) = \arctan \frac{\operatorname{Im} H(f)}{\operatorname{Re} H(f)} \qquad (1.51)$$

(Re and Im denote the real and imaginary parts of the argument.)

The frequency transfer function of a linear, time-invariant network can easily be measured in the laboratory, using a harmonic generator at the input of the network and an oscilloscope at the output. If the input signal is expressed as

$$x(t) = A \cos 2\pi f_0 t,$$

then the output can be written as follows.

$$y(t) = A\,|H(f_0)| \cos\left[ 2\pi f_0 t + \theta(f_0) \right] \qquad (1.52)$$

The input frequency f_0 is stepped over the range of values of interest; measurements at the input and output thus allow the form of H(f) to be determined.
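The laboratory procedure just described can be simulated. The sketch below (the RC network, its time constant and the Euler discretization are assumptions of the example) drives a simple first-order system with sinusoids of several frequencies and compares the measured amplitude ratio with the analytic |H(f)| of an RC filter; small discrepancies come from the coarse discretization:

```python
# "Measuring" |H(f)| by driving a simulated first-order (RC-type) system
# with sinusoids and reading off the steady-state output amplitude.
import numpy as np

fs, dur, RC = 10_000.0, 2.0, 1e-3      # sampling rate, duration, time constant (assumed)
t = np.arange(0.0, dur, 1.0 / fs)

def rc_response(x):
    # Forward-Euler step of dv/dt = (x - v)/RC
    v, out, a = 0.0, np.empty_like(x), (1.0 / fs) / RC
    for i, xi in enumerate(x):
        v += a * (xi - v)
        out[i] = v
    return out

for f0 in (50.0, 159.0, 1000.0):       # 159 Hz is roughly 1/(2 pi RC)
    y = rc_response(np.cos(2 * np.pi * f0 * t))[t.size // 2:]  # drop transient
    amp = np.sqrt(2.0 * np.mean(y ** 2))                       # sinusoid amplitude
    theory = 1.0 / np.sqrt(1.0 + (2 * np.pi * f0 * RC) ** 2)
    print(f"f0 = {f0:6.0f} Hz: measured |H| = {amp:.3f}, analytic = {theory:.3f}")
```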

1.6.2.1. Stochastic processes and linear systems

If a random process forms the input of a linear, time-invariant system, the output of the system is also a random process; each sample function of the input process gives a sample function of the output process. The input power spectral density G_X(f) and the output power spectral density G_Y(f) are related as follows.

$$G_Y(f) = G_X(f)\,\left| H(f) \right|^{2} \qquad (1.53)$$

Equation (1.53) provides a simple way to find the power spectral density at the output of a linear, time-invariant system when a random process is applied as input.
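Equation (1.53) gives the output power by a single integration. A minimal numeric sketch (the RC-type |H(f)|², the cutoff and the integration grid are assumptions of the example) shows that white noise of infinite power acquires finite power after filtering:

```python
# Output PSD and power for white noise through a first-order filter,
# G_Y(f) = |H(f)|^2 G_X(f), equation (1.53).
import numpy as np

N0 = 2.0                                   # so the two-sided input PSD N0/2 = 1 W/Hz
fc = 100.0                                 # assumed filter cutoff, Hz
f = np.linspace(-20_000.0, 20_000.0, 400_001)
G_Y = (N0 / 2.0) / (1.0 + (f / fc) ** 2)   # |H(f)|^2 = 1/(1 + (f/fc)^2)
P_out = np.trapz(G_Y, f)                   # finite output power
print(f"output power ~ {P_out:.1f} W (analytic (N0/2)*pi*fc = {N0 / 2 * np.pi * fc:.1f} W)")
```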

In Chapters 3 and 4 we will consider signal detection in Gaussian noise. A fundamental property of Gaussian processes applied to linear systems will be used: if a Gaussian process X(t) is applied to a time-invariant linear filter, the random process Y(t) at the output is also Gaussian.

1.6.3. Transmission without distortion

What is needed for a network to behave like an ideal transmission channel? The signal at the output of an ideal communication channel may be delayed relative to the input signal, and the two signals may have different amplitudes (a simple rescaling), but in all other respects the signal must be undistorted: it must have the same shape as the input signal. Therefore, for ideal distortionless transmission, we can describe the output signal as

$$y(t) = K\,x(t - t_0), \qquad (1.54)$$

where K and t_0 are constants. Applying the Fourier transform to both sides (see Section A.3.1), we have the following.

$$Y(f) = K\,X(f)\,e^{-j 2\pi f t_0} \qquad (1.55)$$

Substituting expression (1.55) into equation (1.49), we see that the necessary transfer function of the system for transmission without distortion has the following form.

$$H(f) = K\,e^{-j 2\pi f t_0} \qquad (1.56)$$

Therefore, to obtain ideal distortionless transmission, the overall response of the system must have a constant modulus, and the phase shift must be linear in frequency. It is not enough for the system to amplify or attenuate all frequency components equally: all harmonics of the signal must arrive at the output with the same time delay so that they add up correctly. Since the time delay t_0 is related to the phase shift θ and the cyclic frequency ω = 2πf by the relation

$$t_0 = -\frac{\theta(f)}{2\pi f}, \qquad (1.57a)$$

it is obvious that, in order for the delay of all components to be the same, the phase shift must be proportional to the frequency. To measure signal distortion caused by delay, a characteristic called group delay is often used; it is defined as follows.

$$\tau(f) = -\frac{1}{2\pi}\,\frac{d\theta(f)}{df} \qquad (1.57b)$$

Thus, for distortionless transmission we have two equivalent requirements: the phase must be linear in frequency, or the group delay must be constant. In practice, a signal will be distorted in passing through some parts of a system. To eliminate this distortion, phase or amplitude correction (equalization) circuits can be introduced into the system. In general, distortion is the overall input-output characteristic of a system that determines its quality.

1.6.3.1. Ideal filter

It is unrealistic to build an ideal network described by equation (1.56). The problem is that equation (1.56) implies infinite bandwidth, while the bandwidth of a system is determined by the interval of positive frequencies over which the modulus |H(f)| maintains a specified value. (In general, there are several measures of bandwidth; the most common are listed in Section 1.7.) As an approximation to an ideal network of infinite bandwidth, we choose a truncated network that passes without distortion all harmonics with frequencies between f_l and f_u, where f_l is the lower cutoff frequency and f_u the upper one, as shown in Fig. 1.11. Each such network is called an ideal filter. Outside the range f_l < f < f_u, which is called the passband, the response amplitude of an ideal filter is assumed to be zero. The effective bandwidth is determined by the width of the filter passband and equals f_u - f_l Hz.

If f_l ≠ 0 and f_u ≠ ∞, the filter is called a band-pass filter (Fig. 1.11(a)). If f_l = 0 and f_u has a finite value, it is called a low-pass filter (Fig. 1.11(b)). If f_l is non-zero and f_u = ∞, it is called a high-pass filter (Fig. 1.11(c)).

Fig.1.11. Transfer functions of ideal filters: a) ideal band-pass filter; b) ideal low-pass filter; c) ideal high-pass filter

Using equation (1.59), for the ideal low-pass filter with bandwidth f_u Hz shown in Fig. 1.11(b), the transfer function can be written as follows.

$$H(f) = \left| H(f) \right| e^{-j 2\pi f t_0}, \quad \text{where} \quad \left| H(f) \right| = \begin{cases} 1, & |f| < f_u \\ 0, & |f| \ge f_u \end{cases} \qquad (1.58)$$

The impulse response of the ideal low-pass filter, shown in Fig. 1.12, is expressed by the following formula:

$$h(t) = 2 f_u\,\mathrm{sinc}\!\left[ 2 f_u (t - t_0) \right],$$

Fig.1.12. Impulse response of an ideal low-pass filter

where the sinc function is defined in equation (1.39). The impulse response shown in Fig. 1.12 is non-causal: a non-zero response appears at the filter output before the signal is applied to the input at t = 0. Thus, it should be obvious that the ideal filter described by equation (1.58) is not physically realizable.

Example 1.2. Passing white noise through an ideal filter

White noise with the power spectral density shown in Fig. 1.8(a) is applied to the input of the ideal low-pass filter shown in Fig. 1.11(b). Determine the power spectral density G_Y(f) and the autocorrelation function R_Y(τ) of the output signal.

Solution

By equation (1.53), the power spectral density of the output is G_Y(f) = G_n(f)|H(f)|², i.e., N_0/2 within the filter passband |f| < f_u and zero outside it. The autocorrelation function is the result of applying the inverse Fourier transform to the power spectral density and is given by the following expression (see Table A.1):

$$R_Y(\tau) = N_0 f_u\,\mathrm{sinc}(2 f_u \tau)$$

Comparing this result with formula (1.62), we see that it has the same form as the impulse response of the ideal low-pass filter shown in Fig. 1.12. In this example, the ideal low-pass filter converts the autocorrelation function of white noise (defined in terms of the delta function) into a sinc function. After filtering, the noise is no longer white: the output noise signal has zero correlation with its shifted copies only for shifts τ = n/(2f_u), where n is any non-zero integer.

1.6.3.2. Realizable filters

The simplest realizable low-pass filter consists of a resistance R and a capacitance C, as shown in Fig. 1.13(a); this filter is called an RC filter, and its transfer function can be expressed as follows.

$$H(f) = \frac{1}{1 + j 2\pi f R C}, \qquad (1.63)$$

The amplitude characteristic |H(f)| and the phase characteristic θ(f) are shown in Fig. 1.13(b), (c). The bandwidth of the low-pass filter is defined at the half-power point: the frequency at which the output signal power is half its maximum value, or at which the amplitude of the output voltage equals 1/√2 of its maximum value.

In general, the half-power point is expressed in decibels (dB) as the -3 dB point, i.e., the point 3 dB below the maximum value. By definition, the value in decibels is determined by the ratio of the powers P_1 and P_2.

$$\text{dB} = 10 \log_{10} \frac{P_2}{P_1} = 10 \log_{10} \frac{V_2^2 / R_2}{V_1^2 / R_1} \qquad (1.64a)$$

Here V_1 and V_2 are voltages, and R_1 and R_2 are resistances. In communication systems, normalized power is usually used for analysis; in this case the resistances R_1 and R_2 are taken equal to 1 Ohm, so

Fig.1.13. RC filter and its transfer function: a) RC filter; b) amplitude characteristic of the RC filter; c) phase response of the RC filter

$$\text{dB} = 10 \log_{10} \frac{V_2^2}{V_1^2} \qquad (1.64b)$$

The amplitude response can be expressed in decibels as

$$\text{dB} = 20 \log_{10} \frac{V_2}{V_1}, \qquad (1.64c)$$

where V_1 and V_2 are the input and output voltages, and the input and output resistances are assumed to be equal.

From equation (1.63) it is easy to verify that the half-power point of the RC low-pass filter corresponds to ω = 1/(RC) rad/s, or f = 1/(2πRC) Hz. Thus the bandwidth in hertz is 1/(2πRC). The filter form factor is a measure of how well a realizable filter approximates the ideal one. It is usually defined as the ratio of the -60 dB and -6 dB filter bandwidths. A sufficiently small form factor (about 2) can be obtained in a band-pass filter with a very sharp cutoff. By comparison, the form factor of a simple RC low-pass filter is about 600.
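A numeric check of the half-power point (the component values are assumptions chosen for the illustration):

```python
# The -3 dB point of the RC low-pass, equation (1.63), falls at 1/(2 pi RC).
import numpy as np

R, C = 1_000.0, 1e-6                        # 1 kOhm and 1 uF, assumed values
f = np.logspace(0, 5, 200_000)              # 1 Hz .. 100 kHz
H = 1.0 / (1.0 + 1j * 2 * np.pi * f * R * C)
i3 = np.argmin(np.abs(np.abs(H) - 1 / np.sqrt(2)))
print(f"-3 dB found at {f[i3]:.1f} Hz; analytic 1/(2 pi RC) = {1/(2*np.pi*R*C):.1f} Hz")
```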

There are several useful approximations of the characteristic of an ideal low-pass filter. One of them is provided by the Butterworth filter, which approximates the ideal low-pass filter with the function

$$\left| H_n(f) \right| = \frac{1}{\sqrt{1 + (f / f_u)^{2n}}}, \qquad (1.65)$$

where f_u is the upper cutoff frequency (-3 dB) and n is the order of the filter. The higher the order, the greater the complexity and cost of implementing the filter. Fig. 1.14 shows amplitude plots |H(f)| for several values of n. Note that as n grows, the amplitude characteristics approach those of the ideal filter. Butterworth filters are popular because they are the best approximation of the ideal case in the sense of maximal flatness of the response in the filter passband.

Periodic continuation of an impulse. The concept of the spectral density of the signal. Inverse Fourier transform. The condition for the existence of the spectral density of the signal. Relationship between the pulse duration and the width of its spectrum. The generalized Rayleigh formula. Mutual spectral density of signals. Energy spectrum. Correlation analysis of signals. Comparison of signals shifted in time.

Purpose of the lecture:

Obtain spectral characteristics of non-periodic (impulse) signals by generalizing Fourier series. Determine the requirements for the bandwidth of the radio device. Represent signals in terms of their spectral densities. Use the energy spectrum to obtain various engineering estimates. Understand how the need arises for signals with specially selected properties.

Let s(t) be a single pulse signal of finite duration. Mentally complementing it with identical signals repeating periodically after a certain time interval T, we obtain the previously studied periodic sequence s_per(t), which can be represented as a complex Fourier series

$$s_{\text{per}}(t) = \sum_{n=-\infty}^{\infty} C_n\,e^{j n \omega_1 t} \qquad (12.1)$$

with coefficients

$$C_n = \frac{1}{T} \int_{-T/2}^{T/2} s(t)\,e^{-j n \omega_1 t}\,dt. \qquad (12.2)$$

In order to return to a single pulse signal, let the repetition period T tend to infinity. In this case, obviously:

a) the frequencies of neighboring harmonics nω₁ and (n+1)ω₁ will be arbitrarily close, so that in formulas (12.1) and (12.2) the discrete variable nω₁ can be replaced by the continuous variable ω, the current frequency;

b) the amplitude coefficients C_n will become infinitesimally small owing to the presence of T in the denominator of formula (12.2).

Our task now is to find the limiting form of formula (12.1) as T → ∞.

Let us consider a small frequency interval Δω forming a neighborhood of some selected frequency ω₀. This interval will contain N = Δω/ω₁ = ΔωT/(2π) individual spectral components whose frequencies differ as little as desired. Therefore the components can be added as if they all had the same frequency and were characterized by the same complex amplitudes.

As a result, we find the complex amplitude of the equivalent harmonic signal, which reflects the contribution of all spectral components contained within the interval Δω:

$$\Delta C = \frac{\Delta\omega}{2\pi} \int_{-\infty}^{\infty} s(t)\,e^{-j \omega_0 t}\,dt. \qquad (12.3)$$

The function

$$S(\omega) = \int_{-\infty}^{\infty} s(t)\,e^{-j \omega t}\,dt \qquad (12.4)$$

is called the spectral density of the signal s(t). Formula (12.4) implements the Fourier transform of this signal.

Let us now solve the inverse problem of the spectral theory of signals: find the signal from its spectral density, which we consider given.

Since, in the limit, the frequency intervals between neighboring harmonics decrease indefinitely, the last sum should be replaced by the integral

$$s(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S(\omega)\,e^{j \omega t}\,d\omega. \qquad (12.5)$$

This important formula is called the inverse Fourier transform for the signal s(t).

Let us finally formulate the fundamental result: the signal s(t) and its spectral density S(ω) are in one-to-one correspondence through the direct and inverse Fourier transforms:

$$S(\omega) = \int_{-\infty}^{\infty} s(t)\,e^{-j \omega t}\,dt, \qquad (12.6)$$

$$s(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S(\omega)\,e^{j \omega t}\,d\omega.$$

The spectral representation of signals opens up a direct path to the analysis of the passage of signals through a wide class of radio circuits, devices and systems.

The signal s(t) can be associated with its spectral density S(ω) only if the signal is absolutely integrable, i.e., if the integral

$$\int_{-\infty}^{\infty} \left| s(t) \right| dt < \infty$$

exists.

Such a condition significantly narrows the class of admissible signals. Thus, in the indicated classical sense, one cannot speak of the spectral density of the harmonic signal u(t) = U_m cos ω₀t, which exists over the entire infinite time axis.

Important takeaway: the shorter the pulse duration, the wider its spectrum.

The spectrum width is understood as the frequency interval within which the modulus of the spectral density is not less than some predetermined level, for example, within which it varies from |S|_max down to 0.1|S|_max.

The product of the width of the pulse spectrum and the pulse duration is a constant that depends only on the pulse shape and, as a rule, is of the order of unity. The shorter the pulse duration, the wider the bandwidth of the corresponding amplifier must be. Short impulse noise has a wide spectrum and can therefore degrade radio reception over a large frequency band.
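The duration-bandwidth product can be checked numerically for a rectangular pulse, whose spectrum modulus is τ_p|sinc(f τ_p)|; the 0.1|S|max width criterion follows the definition above, and the frequency grid is an assumption of the example:

```python
# Duration-bandwidth product of a rectangular pulse: the spectrum width at
# the 0.1|S|max level scales as 1/tau_p, so width * tau_p stays constant.
import numpy as np

for tau_p in (1.0, 0.5, 0.1):                  # pulse durations, s
    f = np.linspace(0.0, 20.0 / tau_p, 100_001)
    S = tau_p * np.abs(np.sinc(f * tau_p))     # |S(f)| of a rectangular pulse
    width = f[np.argmax(S <= 0.1 * S[0])]      # first drop below the 0.1 level
    print(f"tau_p = {tau_p}: width = {width:.2f} Hz, product = {width * tau_p:.2f}")
```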

Mathematical models of many signals widely used in radio engineering do not satisfy the absolute integrability condition, so the Fourier transform method in its usual form is not applicable to them. However, we can talk about the spectral densities of such signals, if we assume that these densities are described by generalized functions.

Let two signals u(t) and v(t), in the general case complex-valued, be defined by their inverse Fourier transforms.

Let us find the scalar product of these signals, expressing one of them, for example v(t), through its spectral density:

$$(u, v) = \int_{-\infty}^{\infty} u(t)\,v^{*}(t)\,dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} U(\omega)\,V^{*}(\omega)\,d\omega.$$

The resulting relation is the generalized Rayleigh formula; its easily remembered interpretation is that the scalar product of two signals is, up to a coefficient, proportional to the scalar product of their spectral densities. If the signals coincide identically, the scalar product becomes the energy:

$$E_u = (u, u) = \int_{-\infty}^{\infty} u^{2}(t)\,dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left| U(\omega) \right|^{2} d\omega. \qquad (12.7)$$

Let us call the mutual energy spectrum of the real signals u(t) and v(t) the function

$$W_{uv}(\omega) = U(\omega)\,V^{*}(\omega), \qquad (12.8)$$

such that

$$(u, v) = \frac{1}{2\pi} \int_{-\infty}^{\infty} W_{uv}(\omega)\,d\omega. \qquad (12.9)$$

It is easy to see that Re W_uv(ω) is an even, and Im W_uv(ω) an odd, function of frequency. Only the real part contributes to the integral (12.9), so

$$(u, v) = \frac{1}{\pi} \int_{0}^{\infty} \operatorname{Re} W_{uv}(\omega)\,d\omega. \qquad (12.10)$$

The last formula makes it possible to analyze the "fine structure" of the interconnection of signals.

Moreover, the generalized Rayleigh formula, presented in the form (12.10), indicates a fundamental way to reduce the degree of coupling between two signals, achieving their orthogonality in the limit. To do this, one of the signals must be processed in a special physical system called a frequency filter. This filter is required not to pass to its output the spectral components lying within the frequency interval where the real part of the mutual energy spectrum is large. The frequency dependence of the transmission coefficient of such an orthogonalizing filter will have a pronounced minimum within the indicated frequency range.

The spectral representation of the signal energy is easily obtained from the generalized Rayleigh formula if the signals u(t) and v(t) in it are taken to be identical. Formula (12.8), expressing the spectral energy density, then takes the form

$$W_u(\omega) = \left| U(\omega) \right|^{2}. \qquad (12.11)$$

The quantity W_u(ω) is called the spectral energy density of the signal u(t) or, briefly, its energy spectrum. The formula for the signal energy will then be written as

$$E_u = \frac{1}{2\pi} \int_{-\infty}^{\infty} W_u(\omega)\,d\omega. \qquad (12.12)$$

Relation (12.12) is known as the Rayleigh formula (in the narrow sense), which states the following: the energy of any signal is the result of summing the contributions from different intervals of the frequency axis.
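The Rayleigh formula can be verified numerically via the FFT; working with the frequency f in hertz absorbs the 1/2π factor. A minimal sketch (the Gaussian test pulse and the sampling grid are assumptions of the example):

```python
# Numeric check of the Rayleigh (Parseval) identity: the signal energy
# computed in the time domain equals the energy of its spectral density.
import numpy as np

fs = 1000.0                                      # sampling rate, Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
u = np.exp(-((t - 0.5) ** 2) / (2 * 0.01))       # smooth test pulse
E_time = np.sum(u ** 2) / fs                     # approximates integral of u^2 dt
U = np.fft.fft(u) / fs                           # approximates the spectral density U(f)
df = fs / u.size
E_freq = np.sum(np.abs(U) ** 2) * df             # approximates integral of |U(f)|^2 df
print(f"E_time = {E_time:.6f}, E_freq = {E_freq:.6f}")  # agree to machine precision
```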

When studying a signal through its energy spectrum, we inevitably lose the information contained in the phase spectrum of the signal, since, in accordance with formula (12.11), the energy spectrum is the squared modulus of the spectral density and does not depend on its phase.

Let us turn to a simplified picture of the operation of a pulse radar designed to measure the range to a target. Here, the information about the object of measurement is contained in the value τ, the time delay between the probing and received signals. The probing signal u(t) and the received signal u(t − τ) are identical in shape for any delay. A block diagram of a radar signal-processing device designed for range measurement may look as shown in Figure 12.1.

Figure 12.1 - Device for measuring signal delay time

Consider the so-called energy form of the Fourier integral. In Chapter 5, formulas (7.15) and (7.16) were presented, which give the transition from the time function to the Fourier image and vice versa. If some random function of time x(t) is considered, these formulas can be written for it in the corresponding form. Squaring x(t), integrating over all time, and replacing one of the factors by expression (11.54), we obtain:

The value in square brackets in (11.57) is, as is easy to see, the original function of time (11.55). The result is the so-called Rayleigh formula (Parseval's theorem), which corresponds to the energy form of the Fourier integral:

The right-hand sides of (11.58) and (11.59) are quantities proportional to the energy of the process under consideration. For example, if we consider the current flowing through a certain resistor of resistance R, the energy released in this resistor over the observation time will be

Formulas (11.58) and (11.59) express the energy form of the Fourier integral.

However, these formulas are inconvenient because for most processes the energy over an infinite time interval also tends to infinity. It is therefore more convenient to deal not with the energy but with the average power of the process, obtained by dividing the energy by the observation interval. Formula (11.58) can then be represented as

Introducing the notation S(ω) for the resulting quantity, which is called the spectral density, we note the following: according to its physical meaning, the spectral density is a quantity proportional to the average power of the process in the frequency range from ω to ω + dω.

In some cases the spectral density is considered only for positive frequencies and is doubled at the same time, which is permissible because the spectral density is an even function of frequency. Then, for example, formula (11.62) should be written in terms of the spectral density for positive frequencies; with this choice the formulas become more symmetrical.

A very important circumstance is that the spectral density and the correlation function of a random process are mutual Fourier transforms, i.e., they are connected by integral relations of the type (11.54) and (11.55). This property is given here without proof.

Thus, the following formulas can be written:

$$S(\omega) = \int_{-\infty}^{\infty} R(\tau)\,e^{-j \omega \tau}\,d\tau, \qquad (11.65)$$

$$R(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S(\omega)\,e^{j \omega \tau}\,d\omega. \qquad (11.66)$$

Since the spectral density and the correlation function are even real functions, formulas (11.65) and (11.66) are sometimes presented in the simpler form

$$S(\omega) = 2 \int_{0}^{\infty} R(\tau) \cos \omega\tau\,d\tau, \qquad R(\tau) = \frac{1}{\pi} \int_{0}^{\infty} S(\omega) \cos \omega\tau\,d\omega.$$

This follows from the Euler identity e^{±jωτ} = cos ωτ ± j sin ωτ: after substitution into (11.65) and (11.66) the imaginary parts can be discarded, since real functions stand on the left-hand sides.

The physical meaning of this relationship lies in the fact that the narrower the graph of the spectral density (Fig. 11.16, a), i.e., the lower the frequencies represented in it, the more slowly the quantity x changes in time. Conversely, the wider the graph of the spectral density (Fig. 11.16, b), i.e., the higher the frequencies represented in it, the finer the structure of the function x(t) and the faster its changes in time.

As can be seen from this consideration, the relationship between the form of the spectral density and the form of the time function is inverse compared to the relationship between the correlation function and the process itself (Fig. 11.14). It follows that a narrower graph of the correlation function corresponds to a wider graph of the spectral density, and vice versa.

These formulas involve the delta functions δ(τ) and δ(ω), which, unlike the impulse functions discussed in Chapter 4, are even. This means that the function δ(τ) is located symmetrically with respect to the origin and can be defined as follows:

A similar definition applies to the function δ(ω). Sometimes the normalized spectral density is introduced, which is the Fourier image of the normalized correlation function (11.52):

and hence

where D is the dispersion.

Mutual spectral densities likewise serve as a measure of the relationship between two random variables. If there is no coupling between the variables, the mutual spectral densities are equal to zero.

Let's look at some examples.

This function is shown in Fig. 11.17, a. The Fourier image corresponding to it, found from Table 11.3, will be

The spectrum of the process consists of a single peak of the impulse-function type located at the origin of coordinates (Fig. 11.17, b).

This means that all the power of the process under consideration is concentrated at zero frequency, which is to be expected.

This function is shown in Fig. 11.18, a. In accordance with Table 11.3, the spectral density will be

3. For a periodic function expanded in a Fourier series, the spectrum consists of individual lines of the impulse-function type at the harmonic frequencies. If the function, in addition to the periodic part, contains a non-periodic component, its spectrum will contain, along with individual lines of the impulse-function type, also a continuous part (Fig. 11.20). Individual peaks on the spectral density graph indicate the presence of hidden periodicities in the function under study.

If the function does not contain a periodic part, it has a continuous spectrum without pronounced peaks.

Let us consider some stationary random processes that are important in the study of control systems. We will consider only centered processes, i.e., processes with zero mean value. In this case, the mean square of the random variable is equal to the variance. Taking a constant displacement into account in a control system is elementary.

1. White noise, a process with constant spectral density (Fig. 11.21, a). An example of such a process is the thermal noise of a resistor: the level of the spectral density of the chaotic voltage across the resistor is proportional to its resistance and to the absolute temperature.

Based on (11.68), the spectral density (11.71) corresponds to a correlation function of the delta-function type: there is no correlation between subsequent and previous values of the random variable x. Such a process has infinite dispersion and hence infinite power.

To obtain a physically real process, it is convenient to introduce the concept of white noise with a limited spectral density (Fig. 11.21, b): the spectral density is constant within a finite frequency band and zero outside it. This process corresponds to a correlation function of the decaying oscillating type, and the rms value of the random variable is proportional to the square root of the frequency band:

It is often more convenient to approximate dependence (11.73) by a smooth curve. For this purpose one can, for example, use expression (11.77), in which a single coefficient determines the bandwidth.

At low frequencies this process approaches white noise, since for these frequencies the spectral density is nearly constant.

Integration of (11.77) over all frequencies makes it possible to determine the dispersion:

Therefore, the spectral density (11.77) can be written in another form:

The correlation function for this process is

The correlation function is also shown in fig. 11.21, c.

The transition from one value to another is instantaneous. The time intervals obey the Poisson distribution law (11.4).

A graph of this type is obtained, to a first approximation, for example, when tracking a moving target with a radar. A constant value of the velocity corresponds to motion of the target in a straight line; a change in the sign or magnitude of the velocity corresponds to a maneuver of the target.

The quantity of interest is the average value of the time interval during which the angular velocity remains constant; for radar, this is the average time of straight-line motion of the target.

To determine the correlation function, it is necessary to find the average value of the product of the values of the process at two moments of time. Two cases are possible here.

1. Both moments belong to the same interval. Then the average value of the product of the angular velocities is equal to the mean square of the angular velocity, i.e., to the dispersion:

2. The moments belong to different intervals. Then the average value of the product of the velocities is equal to zero:

since products with positive and negative signs are equally probable. The correlation function will then be equal to

where the probabilities of finding the two moments in the same and in different intervals enter as weights.

The probability of the absence of a sign change over a given time interval follows from the Poisson law; for successive subintervals the probabilities multiply, since these events are independent. As a result, for a finite interval Δτ we obtain

The modulus sign on τ is used because expression (11.80) must correspond to an even function. The expression for the correlation function coincides with (11.79); therefore, the spectral density of the process under consideration must coincide with (11.78):

Note that, in contrast to (11.78), the spectral density formula (11.81) is written for the angular velocity of the process (Fig. 11.22). If we pass from the angular velocity to the angle, we obtain a non-stationary random process with a variance tending to infinity. However, in most cases the servo system at whose input this process acts has astatism of the first or higher order. Therefore the first error coefficient c₀ of the servo system is equal to zero, and its error is determined only by the input velocity and by derivatives of higher orders, with respect to which the process is stationary. This makes it possible to use the spectral density (11.81) in calculating the dynamic error of the tracking system.

3. Irregular pitching. Some objects, such as ships, aircraft and others, being under the influence of irregular disturbances (irregular waves, atmospheric turbulence, etc.), move according to a random law. Such an object responds mainly to those perturbation frequencies that are close to its natural oscillation frequency. The resulting random motion of the object is called irregular rolling, in contrast to regular rolling, which is a periodic motion.

A typical record of irregular pitching is shown in Fig. 11.23. Examination of this graph shows that, despite its random character, this motion is quite close to periodic.

In practice, the correlation function of irregular rolling is often approximated by the expression

where D is the dispersion. The parameters of the approximation are usually found by processing experimental data (field tests).

The correlation function (11.82) corresponds to the spectral density (see Table 11.3)

The inconvenience of approximation (11.82) is that this formula can describe the behavior of only one quantity of the irregular rolling (the angle, the angular velocity or the angular acceleration). The value of D will then correspond to the dispersion of the angle, the velocity or the acceleration, respectively.

If, for example, formula (11.82) is written for the angle, then it corresponds to irregular rolling in which the dispersion of the angular velocity tends to infinity, i.e., to a physically unrealistic process.

A more convenient formula for approximating the pitch angle is the following:

However, this approximation also corresponds to a physically unrealistic process, since the dispersion of the angular acceleration turns out to tend to infinity.

To obtain the final dispersion of the angular acceleration, even more complex approximation formulas are required, which are not presented here.

Typical curves of the correlation function and the spectral density of irregular rolling are shown in Fig. 11.24.