
International system of units of physical quantities. International System of Units (SI)

Kolchkov V.I. METROLOGY, STANDARDIZATION AND CERTIFICATION. M.: Tutorial

3. Metrology and technical measurements

3.3. International system of units of physical quantities

The harmonized International System of Units of Physical Quantities was adopted in 1960 by the XI General Conference on Weights and Measures. The international system is abbreviated SI, from the initial letters of its French name, Système International. The system specifies seven basic units — meter, kilogram, second, ampere, kelvin, candela and mole — and two additional units, the radian and the steradian, as well as prefixes for forming multiples and submultiples.

3.3.1 Basic SI units

  • The meter is equal to the length of the path traveled by light in vacuum in 1/299,792,458 of a second.
  • The kilogram is equal to the mass of the international prototype of the kilogram.
  • The second is equal to the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom.
  • The ampere is equal to the strength of a constant current which, when passing through two parallel straight conductors of infinite length and negligibly small circular cross-sectional area, placed 1 m apart in vacuum, produces between them an interaction force of 2×10⁻⁷ N per meter of length.
  • The kelvin is equal to 1/273.16 of the thermodynamic temperature of the triple point of water.
  • The mole is equal to the amount of substance of a system containing as many structural elements as there are atoms in 0.012 kg of carbon-12.
  • The candela is equal to the luminous intensity, in a given direction, of a source emitting monochromatic radiation of frequency 540×10¹² Hz whose radiant intensity in that direction is 1/683 W/sr.

Table 3.1. Basic and additional SI units

Basic SI units

Value | Unit name | International designation
Length | meter | m
Mass | kilogram | kg
Time | second | s
Electric current | ampere | A
Thermodynamic temperature | kelvin | K
Luminous intensity | candela | cd
Amount of substance | mole | mol

Additional SI units

Value | Unit name | International designation
Plane angle | radian | rad
Solid angle | steradian | sr

3.3.2. SI derived units

Derived units of the International System of Units are formed using the simplest equations between physical quantities, in which the numerical coefficients are equal to one. For example, to determine the unit of linear speed, we use the expression for the speed of uniform rectilinear motion, v = l/t. If the distance traveled l is expressed in meters and the time t in seconds, the speed is obtained in meters per second (m/s). Therefore the SI unit of speed, the meter per second, is the speed of a point moving rectilinearly and uniformly that covers a distance of 1 m in 1 s. Other units, including those with a coefficient not equal to one, are formed similarly.
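A compact worked form of this example, together with a unit formed with a coefficient different from one, is added here as an illustration:

```latex
v = \frac{l}{t},\qquad
[v] = \frac{[l]}{[t]} = \frac{\mathrm{m}}{\mathrm{s}},\qquad
1\ \mathrm{km/h} = \frac{1000\ \mathrm{m}}{3600\ \mathrm{s}} \approx 0.28\ \mathrm{m/s}
```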

Table 3.2. SI derived units (see also Table 3.1)


SI derived units with their own names

Value | Unit name | Designation | Expression in terms of other SI units | Expression in terms of basic and additional SI units
Frequency | hertz | Hz | — | s⁻¹
Force | newton | N | — | m·kg·s⁻²
Pressure | pascal | Pa | N/m² | m⁻¹·kg·s⁻²
Energy, work, amount of heat | joule | J | N·m | m²·kg·s⁻²
Power | watt | W | J/s | m²·kg·s⁻³
Electric charge | coulomb | C | A·s | s·A
Electric potential, voltage | volt | V | W/A | m²·kg·s⁻³·A⁻¹
Electric capacitance | farad | F | C/V | m⁻²·kg⁻¹·s⁴·A²
Electric resistance | ohm | Ω | V/A | m²·kg·s⁻³·A⁻²
Electric conductance | siemens | S | A/V | m⁻²·kg⁻¹·s³·A²
Magnetic flux (flux of magnetic induction) | weber | Wb | V·s | m²·kg·s⁻²·A⁻¹

In principle, one can imagine any number of different systems of units, but only a few have become widespread. All over the world, for scientific and technical measurements, and in most countries in industry and everyday life, the metric system is used.

Basic units.

In the system of units for each measured physical quantity, an appropriate unit of measurement must be provided. Thus, a separate unit of measure is needed for length, area, volume, speed, etc., and each such unit can be determined by choosing one or another standard. But the system of units turns out to be much more convenient if in it only a few units are chosen as the main ones, and the rest are determined through the main ones. So, if the unit of length is a meter, the standard of which is stored in the State Metrological Service, then the unit of area can be considered a square meter, the unit of volume is a cubic meter, the unit of speed is a meter per second, etc.

The convenience of such a system of units (especially for scientists and engineers, who deal with measurements far more than other people) is that the mathematical relationships between the basic and derived units of the system turn out to be simpler. The unit of speed is then the unit of distance (length) per unit of time, the unit of acceleration is the unit of change of speed per unit of time, the unit of force is the unit of acceleration per unit of mass, and so on. In mathematical notation this looks as follows: v = l/t, a = v/t, F = ma = ml/t². These formulas show the "dimension" of the quantities considered, establishing relationships between the units. (Similar formulas make it possible to define units for quantities such as pressure or electric current.) Such relationships are general and hold regardless of the units in which length is measured (meter, foot or arshin) and of the units chosen for the other quantities.
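For instance, taking pressure as force per unit area (the symbol S for area is introduced here only for this illustration), the same dimensional reasoning gives:

```latex
p = \frac{F}{S} = \frac{m\,l\,t^{-2}}{l^{2}} = m\,l^{-1}t^{-2},\qquad
[p] = \mathrm{kg\cdot m^{-1}\cdot s^{-2}} = \mathrm{Pa}
```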

In engineering, the basic unit for measuring mechanical quantities is usually taken to be a unit of force rather than a unit of mass. Thus, if in the system most used in physical research a metal cylinder is taken as the standard of mass, in a technical system it is regarded as the standard of a force that balances the force of gravity acting on it. But since the force of gravity is not the same at different points on the Earth's surface, the location must be specified for an exact realization of the standard; historically it was taken at sea level at a geographic latitude of 45°. At present such a standard is defined as the force needed to give the specified cylinder a certain acceleration. In engineering, however, measurements are usually not carried out with such high accuracy that variations in the force of gravity need to be taken into account (unless measuring instruments are being calibrated).

A lot of confusion is associated with the concepts of mass, force and weight, because there are units of all three of these quantities that bear the same name. Mass is an inertial characteristic of a body, showing how difficult it is for an external force to take the body out of a state of rest or of uniform rectilinear motion. A unit of force is a force that, acting on a unit mass, changes its speed by one unit of speed per unit of time.

All bodies are attracted to each other. Thus, any body near the Earth is attracted to it. In other words, the Earth creates the force of gravity acting on the body. This force is called its weight. The force of weight, as mentioned above, is not the same at different points on the surface of the Earth and at different heights above sea level due to differences in gravitational attraction and in the manifestation of the rotation of the Earth. However, the total mass of a given amount of substance is unchanged; it is the same in interstellar space and at any point on Earth.

Precise experiments have shown that the force of gravity acting on different bodies (i.e. their weight) is proportional to their mass. Therefore, masses can be compared on a balance, and masses that are equal in one place will be equal in any other place (if the comparison is carried out in vacuum to exclude the influence of the displaced air). If a body is weighed on a spring balance, which balances the force of gravity against the force of an extended spring, the result of the weight measurement depends on the place where the measurement is made. Spring scales must therefore be adjusted at each new location so that they indicate the mass correctly. It was the simplicity of the weighing procedure itself that led to the force of gravity acting on the reference mass being adopted as an independent unit of measurement in engineering.

Metric system of units.

The metric system is the common name for the international decimal system of units, the basic units of which are the meter and the kilogram. With some differences in details, the elements of the system are the same all over the world.

Story.

The metric system grew out of the decrees adopted by the National Assembly of France in 1791 and 1795 to define the meter as one ten-millionth of the length of the earth's meridian from the North Pole to the equator.

By a decree issued on July 4, 1837, the metric system was declared mandatory for use in all commercial transactions in France. It has gradually supplanted local and national systems elsewhere in Europe and has been legally accepted in the UK and the US. An agreement signed on May 20, 1875 by seventeen countries created an international organization designed to preserve and improve the metric system.

It is clear that by defining the meter as a ten millionth of a quarter of the earth's meridian, the creators of the metric system sought to achieve invariance and exact reproducibility of the system. They took a gram as a unit of mass, defining it as the mass of one millionth of a cubic meter of water at its maximum density. Since it would not be very convenient to make geodetic measurements of a quarter of the earth's meridian with each sale of a meter of cloth or to balance a basket of potatoes in the market with an appropriate amount of water, metal standards were created that reproduce these ideal definitions with the utmost accuracy.

It soon became clear that metal standards of length could be compared with each other, introducing a much smaller error than when comparing any such standard with a quarter of the earth's meridian. In addition, it became clear that the accuracy of comparing metal mass standards with each other is much higher than the accuracy of comparing any such standard with the mass of the corresponding volume of water.

For this reason the International Commission on the Metre decided in 1872 to take the "archival" meter kept in Paris "as it is" as the standard of length. Similarly, the members of the Commission adopted the archival platinum-iridium kilogram as the standard of mass, "considering that the simple relation established by the creators of the metric system between the unit of weight and the unit of volume is represented by the existing kilogram with an accuracy sufficient for ordinary applications in industry and commerce, while the exact sciences need not a simple numerical relation of this kind but an extremely perfect definition of this relation." In 1875 many countries of the world signed the Metre Convention, which established the procedure for coordinating metrological standards for the world scientific community through the International Bureau of Weights and Measures and the General Conference on Weights and Measures.

The new international organization immediately took up the development of international standards of length and mass and the transfer of their copies to all participating countries.

Length and mass standards, international prototypes.

The international prototypes of the standards of length and mass, the meter and the kilogram, were deposited with the International Bureau of Weights and Measures, located in Sèvres, a suburb of Paris. The standard of the meter was a ruler made of an alloy of platinum with 10% iridium, whose cross section was given a special X shape to increase flexural rigidity with a minimum volume of metal. In a groove of this ruler there was a longitudinal flat surface, and the meter was defined as the distance between the centers of two lines ruled across the bar at its ends, at a standard temperature of 0 °C. The international prototype of the kilogram was the mass of a cylinder made of the same platinum-iridium alloy as the meter standard, with a height and a diameter of about 3.9 cm. The weight of this standard mass, equal to 1 kg at sea level at a geographic latitude of 45°, is sometimes called the kilogram-force. Thus it can be used either as a standard of mass for an absolute system of units or as a standard of force for a technical system of units in which one of the basic units is the unit of force.

The international prototypes were selected from a significant batch of identical standards made at the same time. The other standards of this batch were transferred to the participating countries as national prototypes (state primary standards), which are periodically returned to the International Bureau for comparison with the international standards. Comparisons made at various times since then show no deviations from the international standards beyond the limits of measurement accuracy.

International SI system.

The metric system was received very favorably by scientists of the 19th century, partly because it was proposed as an international system of units, partly because its units were in theory supposed to be independently reproducible, and also because of its simplicity. Scientists began to derive new units for the various physical quantities they dealt with, based on the elementary laws of physics and relating these units to the metric units of length and mass. The metric system increasingly conquered the various European countries, in which many unrelated units for different quantities had previously been in circulation.

Although in all countries that adopted the metric system the standards of the metric units were practically the same, various discrepancies in derived units arose between different countries and different disciplines. In the field of electricity and magnetism two separate systems of derived units emerged: the electrostatic system, based on the force with which two electric charges act on each other, and the electromagnetic system, based on the force of interaction between two hypothetical magnetic poles.

The situation became even more complicated with the advent of the so-called practical electrical units, introduced in the middle of the 19th century by the British Association for the Advancement of Science to meet the demands of the rapidly developing wire telegraph technology. These practical units do not coincide with the units of either of the two systems named above; they differ from the units of the electromagnetic system only by factors equal to integer powers of ten.

Thus, for such common electrical quantities as voltage, current and resistance there were several variants of accepted units, and every scientist, engineer and teacher had to decide which of them to use. With the development of electrical engineering in the second half of the 19th and the first half of the 20th century, the practical units found ever wider use and eventually came to dominate the field.

To eliminate this confusion, in the early 20th century a proposal was put forward to combine the practical electrical units with the corresponding mechanical units based on the metric units of length and mass, and thus to build a consistent (coherent) system. In 1960 the XI General Conference on Weights and Measures adopted a unified International System of Units (SI), defined the basic units of this system and prescribed the use of certain derived units, "without prejudice to the question of others that may be added in the future." Thus, for the first time in history, an international coherent system of units was adopted by international agreement. It is now accepted as the legal system of units of measurement by most countries of the world.

The International System of Units (SI) is a harmonized system in which, for any physical quantity such as length, time or force, there is one and only one unit of measurement. Some of the units are given special names, such as the pascal for pressure, while others are named after the units from which they are derived, such as the unit of speed, the meter per second. The basic units, together with the two additional geometric ones, are presented in Table 1; the derived units that have special names are given in Table 2. Of the derived mechanical units, the most important are the unit of force, the newton; the unit of energy, the joule; and the unit of power, the watt. The newton is defined as the force that imparts to a mass of one kilogram an acceleration of one meter per second squared. The joule is equal to the work done when the point of application of a force equal to one newton is displaced one meter in the direction of the force. The watt is the power at which work of one joule is done in one second. Electrical and other derived units are discussed below. The official definitions of the basic and additional units are as follows.

A meter is the distance traveled by light in a vacuum in 1/299,792,458 of a second. This definition was adopted in October 1983.

The kilogram is equal to the mass of the international prototype of the kilogram.

A second is the duration of 9,192,631,770 periods of radiation oscillations corresponding to transitions between two levels of the hyperfine structure of the ground state of the cesium-133 atom.

Kelvin is equal to 1/273.16 of the thermodynamic temperature of the triple point of water.

A mole is equal to the amount of a substance that contains the same number of structural elements as there are atoms in the carbon-12 isotope with a mass of 0.012 kg.

A radian is the plane angle between two radii of a circle, the arc between which is equal in length to the radius.

The steradian is equal to the solid angle with its vertex at the center of the sphere, which cuts out on its surface an area equal to the area of a square with a side equal to the radius of the sphere.
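In terms of the basic units, the mechanical units named above correspond to the following relations (a summary added here for convenience):

```latex
1\ \mathrm{N} = 1\ \mathrm{kg\cdot m/s^{2}},\qquad
1\ \mathrm{J} = 1\ \mathrm{N\cdot m} = 1\ \mathrm{kg\cdot m^{2}/s^{2}},\qquad
1\ \mathrm{W} = 1\ \mathrm{J/s} = 1\ \mathrm{kg\cdot m^{2}/s^{3}}
```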

For the formation of decimal multiples and submultiples, a number of prefixes and factors are prescribed; they are given in Table 3.

Table 3. SI PREFIXES FOR THE FORMATION OF DECIMAL MULTIPLES AND SUBMULTIPLES, AND THEIR FACTORS

Prefix | Symbol | Factor
exa | E | 10¹⁸
peta | P | 10¹⁵
tera | T | 10¹²
giga | G | 10⁹
mega | M | 10⁶
kilo | k | 10³
hecto | h | 10²
deca | da | 10¹
deci | d | 10⁻¹
centi | c | 10⁻²
milli | m | 10⁻³
micro | µ | 10⁻⁶
nano | n | 10⁻⁹
pico | p | 10⁻¹²
femto | f | 10⁻¹⁵
atto | a | 10⁻¹⁸

Thus, a kilometer (km) is 1000 m, and a millimeter is 0.001 m. (These prefixes apply to all units, such as kilowatts, milliamps, etc.)

Initially, one of the basic units was supposed to be the gram, and this is reflected in the names of the units of mass, but nowadays the basic unit is the kilogram. Instead of the name megagram, the word "ton" is used. In physical disciplines, for example for measuring the wavelength of visible or infrared light, the millionth part of a meter (the micrometer) is often used. In spectroscopy, wavelengths are often expressed in angstroms (Å); an angstrom is equal to one tenth of a nanometer, i.e. 10⁻¹⁰ m. For radiation of shorter wavelengths, such as X-rays, scientific publications allow the use of the picometer and the x-unit (1 x-unit = 10⁻¹³ m). A volume equal to 1000 cubic centimeters (one cubic decimeter) is called a liter (l).

Mass, length and time.

All the basic units of the SI system except the kilogram are now defined in terms of physical constants or phenomena that are considered invariable and reproducible with high accuracy. As for the kilogram, no way has yet been found to realize it with the degree of reproducibility that is achieved in the procedures for comparing various mass standards with the international prototype of the kilogram. Such a comparison can be carried out by weighing on a balance whose error does not exceed 1×10⁻⁸. Standards of multiples and submultiples of the kilogram are established by combined weighing on a balance.

Because the meter is defined in terms of the speed of light, it can be reproduced independently in any well-equipped laboratory. Thus, by the interference method, the line and end standards of length (gauge blocks) used in workshops and laboratories can be checked by comparing them directly with the wavelength of light. Under optimal conditions the error of such methods does not exceed one part in a billion (1×10⁻⁹). With the development of laser technology such measurements have been greatly simplified, and their range has been substantially extended.

Similarly, the second, in accordance with its modern definition, can be realized independently in a properly equipped laboratory with an atomic-beam apparatus. The atoms of the beam are excited by a high-frequency generator tuned to the atomic frequency, and an electronic circuit measures time by counting the oscillation periods in the generator circuit. Such measurements can be carried out with an accuracy of the order of 1×10⁻¹², far better than was possible with the previous definitions of the second based on the rotation of the Earth and its revolution around the Sun. Time and its reciprocal, frequency, are unique in that their standards can be transmitted by radio. Thanks to this, anyone with suitable radio receiving equipment can receive signals of exact time and reference frequency that are almost identical in accuracy to those transmitted on the air.

Mechanics.

Temperature and heat.

Mechanical units alone do not make it possible to solve all scientific and technical problems; additional relations must be introduced. Although the work done in moving a mass against a force, and the kinetic energy of a mass, are equivalent in nature to the thermal energy of a substance, it is more convenient to treat temperature and heat as separate quantities, independent of the mechanical ones.

Thermodynamic temperature scale.

The unit of thermodynamic temperature, the kelvin (K), is defined by the triple point of water, i.e. the temperature at which water is in equilibrium with ice and vapor. This temperature is taken to be 273.16 K, which fixes the thermodynamic temperature scale. This scale, proposed by Kelvin, is based on the second law of thermodynamics. If there are two heat reservoirs at constant temperatures and a reversible heat engine transfers heat from one of them to the other in accordance with the Carnot cycle, then the ratio of the thermodynamic temperatures of the two reservoirs is given by T₂/T₁ = −Q₂/Q₁, where Q₂ and Q₁ are the amounts of heat exchanged with each of the reservoirs (the minus sign indicates that heat is taken from one of them). Thus, if the temperature of the warmer reservoir is 273.16 K and the heat taken from it is twice the heat transferred to the other reservoir, then the temperature of the second reservoir is 136.58 K. If the temperature of the second reservoir were 0 K, no heat would be transferred to it at all, since all the energy of the gas would have been converted into mechanical energy in the adiabatic expansion section of the cycle. This temperature is called absolute zero. The thermodynamic temperature normally used in scientific research coincides with the temperature that enters the ideal gas equation of state PV = RT, where P is the pressure, V the volume, and R the gas constant. The equation shows that for an ideal gas the product of pressure and volume is proportional to the temperature. This law is not satisfied exactly for any real gas, but if corrections are made for virial (intermolecular) effects, the expansion of gases allows the thermodynamic temperature scale to be reproduced.

International temperature scale.

In accordance with the above definition, the temperature can be measured with a very high accuracy (up to about 0.003 K near the triple point) by gas thermometry. A platinum resistance thermometer and a gas reservoir are placed in a heat-insulated chamber. When the chamber is heated, the electrical resistance of the thermometer increases and the gas pressure in the reservoir rises (in accordance with the equation of state), and when cooled, the reverse picture is observed. By simultaneously measuring resistance and pressure, it is possible to calibrate a thermometer according to gas pressure, which is proportional to temperature. The thermometer is then placed in a thermostat in which liquid water can be maintained in equilibrium with its solid and vapor phases. By measuring its electrical resistance at this temperature, a thermodynamic scale is obtained, since the temperature of the triple point is assigned a value equal to 273.16 K.
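For a constant-volume gas thermometer this calibration reduces to the relation below; here p_tp denotes the gas pressure at the triple point of water (the notation is introduced only for this illustration):

```latex
T = 273.16\ \mathrm{K}\cdot \frac{p}{p_{\mathrm{tp}}}
```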

There are two international temperature scales - Kelvin (K) and Celsius (C). The Celsius temperature is obtained from the Kelvin temperature by subtracting 273.15 K from the latter.

Accurate temperature measurements by gas thermometry require much labor and time. Therefore, in 1968 the International Practical Temperature Scale (IPTS) was introduced. Using this scale, thermometers of various types can be calibrated in the laboratory. The scale was established using a platinum resistance thermometer, a thermocouple and a radiation pyrometer, used in the temperature intervals between certain pairs of constant reference points (fixed temperature points). The IPTS was intended to correspond to the thermodynamic scale with the greatest possible accuracy, but, as was found later, its deviations from it are quite significant.

Fahrenheit temperature scale.

The Fahrenheit temperature scale, which is widely used in combination with the British technical system of units, as well as in non-scientific measurements in many countries, is usually defined by two constant reference points: the melting temperature of ice (32 °F) and the boiling point of water (212 °F) at normal (atmospheric) pressure. Therefore, to obtain the Celsius temperature from the Fahrenheit temperature, 32 is subtracted from the latter and the result is multiplied by 5/9.
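Written out as a formula, with a check at the boiling point of water (an illustration added here):

```latex
t_{\mathrm{C}} = \frac{5}{9}\,(t_{\mathrm{F}} - 32),\qquad
t_{\mathrm{C}} = \frac{5}{9}\,(212 - 32) = 100\ ^{\circ}\mathrm{C}
```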

Heat units.

Since heat is a form of energy, it can be measured in joules, and this metric unit has been adopted by international agreement. But since the amount of heat was once determined by the change in temperature of a certain amount of water, a unit called the calorie, equal to the amount of heat needed to raise the temperature of one gram of water by 1 °C, became widespread. Because the heat capacity of water depends on temperature, the value of the calorie had to be specified more precisely, and at least two different calories appeared: the "thermochemical" calorie (4.1840 J) and the "steam" calorie (4.1868 J). The "calorie" used in dietetics is actually a kilocalorie (1000 calories). The calorie is not an SI unit, and it has fallen into disuse in most fields of science and technology.

Electricity and magnetism.

All common electrical and magnetic units of measurement are based on the metric system. In accordance with modern definitions of electrical and magnetic units, they are all derived units derived from certain physical formulas from metric units of length, mass and time. Since most electrical and magnetic quantities are not so easy to measure using the standards mentioned, it was considered that it was more convenient to establish, by appropriate experiments, derived standards for some of the indicated quantities, and measure others using such standards.

SI units.

Below is a list of electrical and magnetic units of the SI system.

The ampere, the unit of electric current, is one of the basic units of the SI system. The ampere is the strength of a constant current which, when passing through two parallel straight conductors of infinite length and negligibly small circular cross-sectional area, placed 1 m apart in vacuum, would produce on each meter of conductor length an interaction force of 2×10⁻⁷ N.

The volt, the unit of potential difference and electromotive force. The volt is the electric voltage across a section of an electrical circuit that dissipates a power of 1 W at a direct current of 1 A.

The coulomb, the unit of quantity of electricity (electric charge). The coulomb is the quantity of electricity carried through the cross section of a conductor in 1 s at a constant current of 1 A.

The farad, the unit of electrical capacitance. The farad is the capacitance of a capacitor whose plates acquire a voltage of 1 V when charged with 1 C.

The henry, the unit of inductance. The henry is the inductance of a circuit in which a self-induction EMF of 1 V arises when the current in the circuit changes uniformly by 1 A per second.

The weber, the unit of magnetic flux. The weber is the magnetic flux whose decrease to zero drives an electric charge of 1 C through a circuit coupled to it that has a resistance of 1 Ohm.

The tesla, the unit of magnetic induction. The tesla is the magnetic induction of a uniform magnetic field in which the magnetic flux through a flat area of 1 m², perpendicular to the lines of induction, is 1 Wb.
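The definitions above can be summarized by the following relations between the units (a summary added here for convenience):

```latex
1\ \mathrm{C} = 1\ \mathrm{A\cdot s},\quad 1\ \mathrm{V} = 1\ \mathrm{W/A},\quad
1\ \mathrm{F} = 1\ \mathrm{C/V},\quad 1\ \mathrm{H} = 1\ \mathrm{V\cdot s/A},\quad
1\ \mathrm{Wb} = 1\ \mathrm{V\cdot s},\quad 1\ \mathrm{T} = 1\ \mathrm{Wb/m^{2}}
```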

Practical standards.

Light and illumination.

The units of luminous intensity and illuminance cannot be defined on the basis of mechanical units alone. The energy flux in a light wave can be expressed in W/m², and the intensity of a light wave in V/m, as in the case of radio waves. But the perception of illumination is a psychophysical phenomenon in which not only the intensity of the light source is essential, but also the sensitivity of the human eye to the spectral distribution of this intensity.

By international agreement, the candela (previously called the candle) is taken as the unit of luminous intensity; it is equal to the luminous intensity, in a given direction, of a source emitting monochromatic radiation of frequency 540×10¹² Hz (λ = 555 nm) whose radiant intensity in that direction is 1/683 W/sr. This roughly corresponds to the luminous intensity of the spermaceti candle, which once served as the standard.

If the luminous intensity of a source is one candela in all directions, then its total luminous flux is 4π lumens. Thus, if this source is located at the center of a sphere with a radius of 1 m, the illuminance of the inner surface of the sphere is equal to one lumen per square meter, i.e. one lux.
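As a worked illustration (added here) for a source of luminous intensity I = 1 cd:

```latex
\Phi = 4\pi I \approx 12.6\ \mathrm{lm},\qquad
E = \frac{\Phi}{4\pi r^{2}} = \frac{I}{r^{2}} = \frac{1\ \mathrm{cd}}{(1\ \mathrm{m})^{2}} = 1\ \mathrm{lx}
```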

X-ray and gamma radiation, radioactivity.

The roentgen (R) is an obsolete unit of the exposure dose of X-ray, gamma and photon radiation, equal to the amount of radiation that, taking into account secondary electron radiation, produces in 0.001293 g of air ions carrying a charge equal to one CGS unit of charge of each sign. The SI unit of absorbed radiation dose is the gray, equal to 1 J/kg. The standard of absorbed dose is an installation with ionization chambers that measure the ionization produced by the radiation.



A physical quantity is understood as a characteristic of physical objects or phenomena of the material world that is common in a qualitative sense to many objects or phenomena but individual for each of them in a quantitative sense. For example, mass is a physical quantity: it is a qualitative characteristic common to all physical objects, but its quantitative value is individual for different objects.

The value of a physical quantity is understood as its estimate, expressed as the product of an abstract number and the unit adopted for the given physical quantity. For example, in the expression for atmospheric air pressure p = 95.2 kPa, 95.2 is the abstract number representing the numerical value of the pressure, and kPa is the unit of pressure adopted in this case.

A unit of a physical quantity is understood as a physical quantity of fixed size that is accepted as the basis for quantifying specific physical quantities. For example, the meter, the centimeter, etc. are used as units of length.

One of the most important characteristics of a physical quantity is its dimension. The dimension of a physical quantity reflects its relationship with the quantities taken as basic in the system of quantities under consideration.

The system of quantities, which is determined by the International System of Units SI and which is adopted in Russia, contains seven basic system quantities, presented in Table 1.1.

There are two additional SI units - radian and steradian, the characteristics of which are presented in Table 1.2.

From the basic and additional SI units, 18 derived SI units were formed, which were assigned special, mandatory names. Sixteen units are named after scientists, the other two are lux and lumen (see Table 1.3).

Special unit names may be used in the formation of other derived units. Derived units that do not have a special mandatory name are: area, volume, speed, acceleration, density, momentum, moment of force, etc.

Along with SI units, it is allowed to use decimal multiples and submultiples of them. Table 1.4 shows the names and designations of the prefixes of such units and their multipliers. Such prefixes are called SI prefixes.

The choice of one or another decimal multiple or submultiple unit is primarily determined by the convenience of its application in practice. In principle, such multiples and submultiples are chosen in which the numerical values ​​of the quantities are in the range from 0.1 to 1000. For example, instead of 4,000,000 Pa, it is better to use 4 MPa.
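A minimal sketch of how such a prefix can be chosen automatically is given below; the function format_si and the PREFIXES table are illustrative names introduced here, not part of any standard library.

```python
import math

# A subset of SI prefixes and their decimal exponents, sufficient for the example.
PREFIXES = {-9: "n", -6: "µ", -3: "m", 0: "", 3: "k", 6: "M", 9: "G", 12: "T"}

def format_si(value: float, unit: str) -> str:
    """Pick a prefix so the numerical part falls roughly between 0.1 and 1000."""
    if value == 0:
        return f"0 {unit}"
    exp3 = 3 * math.floor(math.log10(abs(value)) / 3)   # nearest lower multiple of 3
    exp3 = max(min(exp3, 12), -9)                        # clamp to the table above
    return f"{value / 10 ** exp3:g} {PREFIXES[exp3]}{unit}"

print(format_si(4_000_000, "Pa"))   # 4 MPa
print(format_si(0.00052, "m"))      # 520 µm
```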

Table 1.1. Basic SI units

Quantity (name; dimension; recommended symbol) | Unit (name; international / Russian designation) | Definition | Recommended multiples and submultiples
Length; L; l | meter; m / м | The meter is equal to the distance traveled in vacuum by a plane electromagnetic wave in 1/299,792,458 of a second | km, cm, mm, µm, nm
Mass; M; m | kilogram; kg / кг | The kilogram is equal to the mass of the international prototype of the kilogram | Mg, g, mg, µg
Time; T; t | second; s / с | The second is equal to 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom | ks, ms, µs, ns
Electric current; I; I | ampere; A / А | The ampere is equal to the strength of a constant current which, when passing through two parallel conductors of infinite length and negligibly small circular cross section, placed 1 m apart in vacuum, would produce on each meter of conductor length an interaction force of 2×10⁻⁷ N | kA, mA, µA, nA, pA
Thermodynamic temperature; Θ; T | kelvin*; K / К | The kelvin is equal to 1/273.16 of the thermodynamic temperature of the triple point of water | MK, kK, mK, µK
Amount of substance; N; n (ν) | mole; mol / моль | The mole is equal to the amount of substance of a system containing as many structural elements as there are atoms in 0.012 kg of carbon-12 | kmol, mmol, µmol
Luminous intensity; J; J | candela; cd / кд | The candela is equal to the luminous intensity, in a given direction, of a source emitting monochromatic radiation of frequency 540×10¹² Hz whose radiant intensity in that direction is 1/683 W/sr | —

* In addition to the Kelvin temperature (symbol T), the Celsius temperature (symbol t), defined by the expression t = T − 273.15, may also be used. The Kelvin temperature is expressed in kelvins, the Celsius temperature in degrees Celsius (°C). A Kelvin temperature interval or difference is expressed in kelvins only; a Celsius temperature interval or difference may be expressed either in kelvins or in degrees Celsius.

Table 1.2

Additional SI units

Quantity (name; dimension; recommended symbol; defining equation) | Unit (name; international / Russian designation) | Definition | Recommended multiples and submultiples
Plane angle; 1; α, β, γ, θ, ν, φ; α = s/r | radian; rad / рад | The radian is equal to the angle between two radii of a circle, the arc between which is equal in length to the radius | mrad, µrad
Solid angle; 1; ω, Ω; Ω = S/r² | steradian; sr / ср | The steradian is equal to the solid angle with its vertex at the center of a sphere that cuts out on the surface of the sphere an area equal to the area of a square with a side equal to the radius of the sphere | —

Table 1.3

SI derived units with special names

Quantity | Dimension | Unit name | Designation (international / Russian)
Frequency | T⁻¹ | hertz | Hz / Гц
Force, weight | LMT⁻² | newton | N / Н
Pressure, mechanical stress, elastic modulus | L⁻¹MT⁻² | pascal | Pa / Па
Energy, work, amount of heat | L²MT⁻² | joule | J / Дж
Power, energy flow | L²MT⁻³ | watt | W / Вт
Electric charge (amount of electricity) | TI | coulomb | C / Кл
Electric voltage, electric potential, electric potential difference, electromotive force | L²MT⁻³I⁻¹ | volt | V / В
Electrical capacitance | L⁻²M⁻¹T⁴I² | farad | F / Ф
Electrical resistance | L²MT⁻³I⁻² | ohm | Ω / Ом
Electrical conductance | L⁻²M⁻¹T³I² | siemens | S / См
Magnetic flux (flux of magnetic induction) | L²MT⁻²I⁻¹ | weber | Wb / Вб
Magnetic flux density, magnetic induction | MT⁻²I⁻¹ | tesla | T / Тл
Inductance, mutual inductance | L²MT⁻²I⁻² | henry | H / Гн
Luminous flux | J | lumen | lm / лм
Illuminance | L⁻²J | lux | lx / лк
Activity of a nuclide in a radioactive source | T⁻¹ | becquerel | Bq / Бк
Absorbed radiation dose, kerma | L²T⁻² | gray | Gy / Гр
Equivalent radiation dose | L²T⁻² | sievert | Sv / Зв

Table 1.4

Names and designations of SI prefixes for the formation of decimal multiples and submultiples and their multipliers

Prefix name | Designation (international / Russian) | Factor
exa | E / Э | 10¹⁸
peta | P / П | 10¹⁵
tera | T / Т | 10¹²
giga | G / Г | 10⁹
mega | M / М | 10⁶
kilo | k / к | 10³
hecto* | h / г | 10²
deca* | da / да | 10¹
deci* | d / д | 10⁻¹
centi* | c / с | 10⁻²
milli | m / м | 10⁻³
micro | µ / мк | 10⁻⁶
nano | n / н | 10⁻⁹
pico | p / п | 10⁻¹²
femto | f / ф | 10⁻¹⁵
atto | a / а | 10⁻¹⁸

* The prefixes "hecto", "deca", "deci" and "centi" may be used only with units that are already in wide use, for example: decimeter, centimeter, decaliter, hectoliter.

MATHEMATICAL OPERATIONS WITH APPROXIMATE NUMBERS

As a result of measurements, as well as of many mathematical operations, approximate values of the sought quantities are obtained. It is therefore necessary to consider a number of rules for calculating with approximate values. These rules reduce the amount of computational work and eliminate additional errors. Approximate values include quantities such as logarithms, various physical constants and the results of measurements.

As is known, any number is written using the digits 1, 2, ..., 9, 0; the digits 1, 2, ..., 9 are always significant. A zero may be either significant, if it stands in the middle or at the end of a number, or insignificant, if it stands at the left of a decimal fraction and only indicates the place value of the remaining digits.

When writing an approximate number, it should be kept in mind that its digits may be correct, doubtful or incorrect. A digit is correct if the absolute error of the number is less than one unit of the place value of that digit (all digits to the left of it are then also correct). The digit to the right of the last correct digit is called doubtful, and the digits to the right of the doubtful one are incorrect. Incorrect digits must be discarded not only in the result but also in the original data; there is no need to round the number further. When the error of a number is not indicated, its absolute error is taken to be half a unit of the place value of the last digit. The place value of the most significant digit of the error shows the place of the doubtful digit in the number. Only correct and doubtful digits may be used as significant digits; if the error of the number is not indicated, all its digits are considered significant.

The following basic rule for writing approximate numbers should be applied (in accordance with ST SEV 543-77): an approximate number must be written with such a number of significant digits that guarantees the correctness of the last significant digit of the number, for example:

1) writing the number 4.6 means that only integers and tenths are correct (the true value of the number can be 4.64; 4.62; 4.56);

2) writing the number 4.60 means that the hundredths of the number are also correct (the true value of the number can be 4.604; 4.602; 4.596);

3) writing the number 493 means that all three digits are correct; if the last digit 3 cannot be vouched for, the number should be written as 4.9·10²;

4) when expressing the density of mercury, 13.6 g/cm³, in SI units (kg/m³), one should write 13.6·10³ kg/m³ and not 13,600 kg/m³, which would imply that five significant digits are correct, whereas the original number contains only three correct significant digits.

The results of experiments are recorded only with significant digits. The decimal point is placed immediately after the first non-zero digit, and the number is multiplied by ten raised to the appropriate power. Zeros at the beginning or at the end of the number are usually not written out. For example, the numbers 0.00435 and 234000 are written as 4.35·10⁻³ and 2.34·10⁵. Such notation simplifies calculations, especially in the case of formulas convenient for taking logarithms.
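A minimal sketch of producing such a notation in code (the helper to_significant is an illustrative name; standard Python formatting is assumed):

```python
def to_significant(value: float, sig: int) -> str:
    """Format value in scientific notation with the given number of significant digits."""
    return f"{value:.{sig - 1}e}"

print(to_significant(0.00435, 3))   # 4.35e-03
print(to_significant(234000, 3))    # 2.34e+05
```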

Rounding a number (in accordance with ST SEV 543-77) is the discarding of significant digits to the right of a certain place, with a possible change of the digit in that place.

When rounding, the last digit retained does not change if:

1) the first discarded digit, counting from left to right, is less than 5;

2) the first discarded digit, equal to 5, was the result of the previous rounding up.

When rounding, the last digit stored is increased by one if

1) the first discarded digit is greater than 5;

2) the first discarded digit, counting from left to right, is 5 (in the absence of previous roundings or in the presence of previous rounding down).

Rounding should be done all at once to the desired number of significant digits, rather than in stages, which can lead to errors.
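A small sketch of the difference between single-step and staged rounding, using ordinary half-up rounding for simplicity (the stricter rule for a trailing 5 given above is not implemented here):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to(value: str, places: int) -> Decimal:
    # Round in a single step to the requested number of decimal places.
    exponent = Decimal(1).scaleb(-places)      # places=1 -> Decimal('0.1')
    return Decimal(value).quantize(exponent, rounding=ROUND_HALF_UP)

print(round_to("2.449", 1))                    # 2.4  (correct, single-step rounding)
staged = round_to(str(round_to("2.449", 2)), 1)
print(staged)                                  # 2.5  (staged rounding overstates the value)
```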

GENERAL CHARACTERISTICS AND CLASSIFICATION OF SCIENTIFIC EXPERIMENTS

Each experiment is a combination of three components: the phenomenon under study (process, object), conditions and means of conducting the experiment. The experiment is carried out in several stages:

1) subject-substantive study of the process under study and its mathematical description based on the available a priori information, analysis and determination of the conditions and means of conducting the experiment;

2) creation of conditions for the experiment and the functioning of the object under study in the desired mode, providing the most effective observation of it;

3) collection, registration and mathematical processing of experimental data, presentation of processing results in the required form;

5) using the results of the experiment, for example, correcting the physical model of a phenomenon or object, using the model for forecasting, controlling or optimizing, etc.

Depending on the type of object (phenomenon) being studied, several classes of experiments are distinguished: physical, engineering, medical, biological, economic, sociological, and so on. The general questions of conducting physical and engineering experiments, in which natural or artificial physical objects (devices) and the processes taking place in them are studied, have been developed most thoroughly. When conducting such experiments, the researcher can repeat measurements of physical quantities under similar conditions many times, set the desired values of the input variables, vary them over a wide range, and fix or eliminate the influence of factors whose effect is not currently under investigation.

Experiments can be classified according to the following criteria:

1) the degree of proximity of the object used in the experiment to the object about which new information is to be obtained (full-scale, bench or test-site, model, and computational experiments);

2) the objectives of the conduct - research, testing (control), management (optimization, tuning);

3) the degree of influence on the conditions of the experiment (passive and active experiments);

4) the degree of human participation (experiments using automatic, automated and non-automated means of conducting the experiment).

The result of the experiment in a broad sense is a theoretical understanding of the experimental data and the establishment of laws and cause-and-effect relationships that make it possible to predict the course of phenomena of interest to the researcher, to choose such conditions under which it is possible to achieve the required or most favorable course of them. In a narrower sense, the result of an experiment is often understood as a mathematical model that establishes formal functional or probabilistic relationships between various variables, processes, or phenomena.

GENERAL INFORMATION ABOUT EXPERIMENTAL TOOLS

The initial information for constructing a mathematical model of the phenomenon under study is obtained with the means of conducting the experiment: a set of measuring instruments of various types (measuring devices, transducers and accessories), information transmission channels, and auxiliary devices that ensure the conditions of the experiment. Depending on the goals of the experiment, a distinction is sometimes made between measuring-information (research), measuring-inspection (control, testing) and measuring-control (management, optimization) systems, which differ both in the composition of the equipment and in the complexity of processing the experimental data. The composition of the measuring instruments is largely determined by the mathematical model of the object being described.

Due to the increasing complexity of experimental studies, modern measuring systems include computing tools of various classes (computers, programmable microcalculators). These tools perform both the tasks of collecting and mathematical processing of experimental information, and the tasks of controlling the course of the experiment and automating the functioning of the measuring system. The effectiveness of the use of computing tools in experiments is manifested in the following main areas:

1) reducing the time for preparing and conducting the experiment as a result of accelerating the collection and processing of information;

2) increasing the accuracy and reliability of the results of the experiment based on the use of more complex and efficient algorithms for processing measuring signals, increasing the amount of experimental data used;

3) reduction in the number of researchers and the emergence of the possibility of creating automatic systems;

4) strengthening control over the course of the experiment and increasing the possibilities for its optimization.

Thus, modern means of conducting an experiment are, as a rule, measuring and computing systems (IVS) or complexes equipped with advanced computing tools. When substantiating the structure and composition of an IVS, it is necessary to solve the following main tasks:

1) determine the composition of the hardware of the IVS (measuring instruments, auxiliary equipment);

2) choose the type of computer that is part of the IVS;

3) establish communication channels between the computer, devices included in the hardware of the IVS, and the consumer of information;

4) develop IVS software.

2. PLANNING OF THE EXPERIMENT AND STATISTICAL PROCESSING OF THE EXPERIMENTAL DATA

BASIC CONCEPTS AND DEFINITIONS

Most studies are carried out to establish, with the help of an experiment, functional or statistical relationships between several quantities, or to solve extremal problems. The classical method of setting up an experiment consists in fixing all the variable factors at accepted levels except one, whose value is varied in a certain way over its domain of definition. This method forms the basis of the one-factor experiment (such an experiment is often called passive). In a one-factor experiment, by varying one factor and stabilizing all the others at the selected levels, the dependence of the quantity under study on a single factor is found. Performing a large number of one-factor experiments in the study of a multifactor system yields partial dependences represented by many separate, merely illustrative graphs; the partial dependences found in this way cannot be combined into a single general one. In the case of a one-factor (passive) experiment, statistical methods are applied after the experiments are over, when the data have already been obtained.

The use of a single-factor experiment for a comprehensive study of a multi-factor process requires a very large number of experiments. In some cases, their implementation requires considerable time, during which the influence of uncontrolled factors on the results of experiments can change significantly. For this reason, the data of a large number of experiments are incomparable. Hence it follows that the results of single-factor experiments obtained in the study of multi-factor systems are often of little use for practical use. In addition, when solving extremal problems, the data of a significant number of experiments turn out to be unnecessary, since they were obtained for a region far from the optimum. For the study of multifactorial systems, the most appropriate is the use of statistical methods of experiment planning.

Experiment planning is understood as the process of determining the number and conditions for conducting experiments that are necessary and sufficient to solve the problem with the required accuracy.

Experiment design is a branch of mathematical statistics. It discusses statistical methods for designing an experiment. These methods make it possible in many cases to obtain models of multifactorial processes with a minimum number of experiments.

The effectiveness of using statistical methods of experiment planning in the study of technological processes is explained by the fact that many important characteristics of these processes are random variables, the distributions of which closely follow the normal law.

The characteristic features of the experiment planning process are the desire to minimize the number of experiments; simultaneous variation of all studied factors according to special rules - algorithms; the use of a mathematical apparatus that formalizes many of the researcher's actions; choosing a strategy that allows you to make informed decisions after each series of experiments.

When planning an experiment, statistical methods are used at all stages of the study: above all before the experiments are set up, when the experimental design is developed; during the experiment, when the results are processed; and after the experiment, when decisions about further actions are made. Such an experiment is called active, and it presupposes experiment planning.

The main advantages of the active experiment are related to the fact that it allows:

1) minimize the total number of experiments;

2) choose clear, logically substantiated procedures that are consistently performed by the experimenter during the study;

3) use a mathematical apparatus that formalizes many of the experimenter's actions;

4) simultaneously vary all variables and optimally use the factor space;

5) organize the experiment in such a way that many of the initial assumptions of the regression analysis are fulfilled;

6) obtain mathematical models that have better properties in some sense compared to models built from a passive experiment;

7) randomize the experimental conditions, i.e., turn numerous interfering factors into random variables;

8) evaluate the element of uncertainty associated with the experiment, which makes it possible to compare the results obtained by different researchers.

Most often, an active experiment is set up to solve one of two main problems. The first is called the extremal problem. It consists in finding the process conditions that provide the optimal value of the selected parameter. A sign of an extremal problem is the requirement to find the extremum of some function. Experiments set up to solve optimization problems are called extremal.

The second task is called interpolation. It consists in constructing an interpolation formula for predicting the values ​​of the studied parameter, which depends on a number of factors.

To solve an extremal or interpolation problem, it is necessary to have a mathematical model of the object under study. The object model is obtained using the results of experiments.

When studying a multifactor process, setting up all possible experiments in order to obtain a mathematical model involves an enormous amount of experimental work, since the number of all possible experiments is very large. The task of experiment planning is to establish the minimum number of experiments required and the conditions for carrying them out, to choose methods for the mathematical processing of the results, and to make decisions.
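A minimal sketch of how the set of runs of a coded two-level full factorial plan can be enumerated (the factor names are illustrative, not taken from the text):

```python
from itertools import product

# Coded two-level full factorial plan for three factors (levels -1 and +1).
factors = ["temperature", "pressure", "time"]
plan = list(product((-1, +1), repeat=len(factors)))

for run, levels in enumerate(plan, start=1):
    print(run, dict(zip(factors, levels)))
# 2**3 = 8 runs cover all combinations of the three factors.
```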

MAIN STAGES AND MODES OF STATISTICAL PROCESSING OF EXPERIMENTAL DATA

2. Drawing up an experiment plan, in particular, determining the values ​​of independent variables, choosing test signals, estimating the scope of observations. Preliminary substantiation and choice of methods and algorithms for statistical processing of experimental data.

3. Direct experimental research, collection of experimental data, their registration and input into a computer.

4. Preliminary statistical processing of data, intended, first of all, to check the fulfillment of the prerequisites underlying the chosen statistical method for constructing a stochastic model of the research object, and, if necessary, to correct the a priori model and change the decision on the choice of processing algorithm.

5. Drawing up a detailed plan for further statistical analysis of experimental data.

6. Statistical processing of experimental data (secondary, complete, final processing), aimed at building a model of the object of study, and statistical analysis of its quality. Sometimes at the same stage, the tasks of using the constructed model are also solved, for example: the parameters of the object are optimized.

7. Formal-logical and meaningful interpretation of the results of experiments, making a decision to continue or complete the experiment, summing up the results of the study.

Statistical processing of experimental data can be carried out in two main modes.

In the first mode, the full volume of experimental data is first collected and recorded, and only then they are processed. This type of processing is called off-line processing, a posteriori processing, data processing on a sample of the full (fixed) volume. The advantage of this processing mode is the possibility of using the entire arsenal of statistical methods for data analysis and, accordingly, the most complete extraction of experimental information from them. However, the efficiency of such processing may not satisfy the consumer, in addition, the control of the experiment is almost impossible.

In the second mode, observations are processed in parallel with their acquisition. This type of processing is called on-line processing, processing of data on a sample of increasing volume, or sequential data processing. In this mode it becomes possible to carry out an express analysis of the results of the experiment and to control its progress promptly.

GENERAL INFORMATION ABOUT BASIC STATISTICAL METHODS

When solving problems of processing experimental data, methods are used based on two main components of the apparatus of mathematical statistics: the theory of statistical estimation of unknown parameters used in describing the model of the experiment, and the theory of testing statistical hypotheses about the parameters or nature of the analyzed model.

1. Correlation analysis. Its essence is to determine the degree of closeness of the relationship (as a rule, linear) between two or more random variables. These random variables may be the input, independent variables; the set may also include the resulting (dependent) variable. In the latter case, correlation analysis makes it possible to select the factors or regressors (in a regression model) that have the most significant effect on the resulting attribute. The selected variables are used for further analysis, in particular when performing regression analysis. Correlation analysis makes it possible to discover previously unknown cause-and-effect relationships between variables. It should be kept in mind, however, that the presence of a correlation between variables is only a necessary, not a sufficient, condition for the existence of causal relationships.

Correlation analysis is used at the stage of preliminary processing of experimental data.
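A minimal sketch of such a preliminary correlation check on toy data (the variable names and values are illustrative only):

```python
import numpy as np

# Two input factors and a resulting variable (generated toy data).
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=50)   # x2 is strongly related to x1
y = 2.0 * x1 + rng.normal(size=50)          # the response depends mainly on x1

r = np.corrcoef(np.vstack([x1, x2, y]))     # rows are treated as variables
print(np.round(r, 2))                       # matrix of pairwise correlation coefficients
```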

2. Analysis of variance (dispersion analysis). This method is intended for processing experimental data that depend on qualitative factors and for assessing the significance of the influence of these factors on the results of observations.

Its essence lies in the decomposition of the variance of the resulting variable into independent components, each of which characterizes the influence of a particular factor on this variable. Comparison of these components makes it possible to assess the significance of the influence of factors.
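A minimal sketch of a one-way analysis of variance on toy data, assuming SciPy is available:

```python
from scipy import stats

# Observations of the resulting variable grouped by three levels of a
# qualitative factor (the numbers are illustrative only).
group_a = [9.8, 10.1, 10.0, 9.9]
group_b = [10.4, 10.6, 10.5, 10.7]
group_c = [9.7, 9.9, 9.8, 10.0]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f_stat, p_value)   # a small p-value indicates a significant factor effect
```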

3. Regression analysis. Regression analysis methods make it possible to establish the structure and parameters of a model that links the quantitative resulting and factor variables, and to assess the degree of its consistency with experimental data. This type of statistical analysis allows solving the main problem of the experiment if the observed and resulting variables are quantitative, and in this sense it is the main one in processing this type of experimental data.
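A minimal sketch of a least-squares straight-line fit to toy data, as a simple special case of regression analysis:

```python
import numpy as np

# Fit y = b0 + b1*x to toy experimental points by least squares.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

b1, b0 = np.polyfit(x, y, deg=1)            # slope and intercept
residuals = y - (b0 + b1 * x)
print(b0, b1)                               # estimated model parameters
print(np.sqrt(np.mean(residuals ** 2)))     # RMS residual as a rough quality measure
```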

4. Factor analysis. Its essence is that the "external" factors used in the model, when they are strongly interrelated, should be replaced by a smaller number of "internal" factors that are difficult or impossible to measure directly but that determine the behavior of the "external" factors and thus the behavior of the resulting variable. Factor analysis makes it possible to put forward hypotheses about the structure of the relationships between variables without specifying this structure in advance and without any preliminary information about it; the structure is determined from the results of observations. The resulting hypotheses can be tested in further experiments. The task of factor analysis is to find a simple structure that reflects and reproduces the real, existing dependencies reasonably accurately.

4. MAIN TASKS OF PRELIMINARY PROCESSING OF EXPERIMENTAL DATA

The ultimate goal of preliminary processing of experimental data is to put forward hypotheses about the class and structure of the mathematical model of the phenomenon under study, to determine the composition and volume of additional measurements, and to choose possible methods for subsequent statistical processing. To do this, it is necessary to solve some particular problems, among which the following can be distinguished:

1. Analysis, rejection and recovery of anomalous (erroneous) or missed measurements, since experimental information is usually of non-uniform quality.

2. Experimental verification of the laws of distribution of the obtained data, estimation of the parameters and numerical characteristics of the observed random variables or processes. The choice of post-processing methods aimed at constructing and testing the adequacy of a mathematical model for the phenomenon under study depends significantly on the law of distribution of the observed quantities.

3. Compression and grouping of initial information with a large amount of experimental data. At the same time, the features of their distribution laws, which were identified at the previous stage of processing, should be taken into account.

4. Combining several groups of measurements obtained, possibly at different times or under different conditions, for joint processing.

5. Identification of statistical relationships and mutual influence of various measured factors and resulting variables, successive measurements of the same values. The solution of this problem allows you to select those variables that have the strongest influence on the resulting feature. The selected factors are used for further processing, in particular, by regression analysis methods. Analysis of correlations makes it possible to put forward hypotheses about the structure of the relationship of variables and, ultimately, about the structure of the phenomenon model.

Pre-processing is characterized by an iterative solution of the main tasks, when they repeatedly return to the solution of a particular problem after obtaining the results at the subsequent stage of processing.

1. CLASSIFICATION OF MEASUREMENT ERRORS.

Measurement is understood as finding the value of a physical quantity experimentally using special technical means. Measurements can be direct, when the desired value is found directly from the experimental data, and indirect, when the desired value is determined on the basis of a known relationship between this value and the quantities subjected to direct measurement. The value of a quantity found by measurement is called the measurement result.

The imperfection of measuring instruments and of the human senses, and often the nature of the measured quantity itself, mean that any measurement yields a result of limited accuracy: the experiment does not give the true value of the measured quantity, only its approximate value. The actual value of a physical quantity is understood as a value found experimentally that is so close to the true value that it can be used in its place.

The measurement accuracy is determined by the proximity of its result to the true value of the measured quantity. The accuracy of the instrument is determined by the degree of approximation of its readings to the true value of the desired value, and the accuracy of the method is determined by the physical phenomenon on which it is based.

Measurement errors are characterized by the deviation of the measurement results from the true value of the measured quantity. The measurement error, like the true value of the measured quantity, is usually unknown. Therefore, one of the main tasks of statistical processing of experimental results is to estimate the true value of the measured quantity from the obtained data. In other words, after repeatedly measuring the sought quantity and obtaining a series of results, each containing some unknown error, the task is to compute an approximate value of the sought quantity with the smallest possible error.

Measurement errors are divided into gross errors (blunders), systematic errors and random errors.

Gross errors. Gross errors arise as a result of a violation of the basic measurement conditions or an oversight by the experimenter. If a gross error is detected, the measurement result should be discarded immediately and the measurement repeated. An external sign of a result containing a gross error is its sharp difference in magnitude from the rest of the results. Some criteria for rejecting gross errors by their magnitude are based on this (they are discussed below); however, the most reliable and effective way is to reject incorrect results directly during the measurement process itself.

Systematic errors. A systematic error is an error that remains constant or changes regularly with repeated measurements of the same quantity. Systematic errors appear because of incorrect adjustment of instruments, inaccuracy of the measurement method, some oversight of the experimenter, or the use of inaccurate data in calculations.

Systematic errors also occur in complex measurements. The experimenter may not be aware of them, although they may be very large. Therefore, in such cases, it is necessary to carefully analyze the measurement technique. Such errors can be detected, in particular, by measuring the desired value by another method. The coincidence of the results of measurements by both methods serves as a certain guarantee of the absence of systematic errors.

When measuring, every effort must be made to eliminate systematic errors, as they can be so large that they greatly distort the results. The identified errors are eliminated by the introduction of amendments.

Random errors. A random error is the component of the measurement error that changes randomly, i.e. the measurement error that remains after all identified systematic and gross errors have been eliminated. Random errors are caused by a large number of both objective and subjective factors that cannot be singled out and taken into account separately. Since the causes leading to random errors are not the same in each experiment and cannot be accounted for, such errors cannot be excluded; one can only estimate their magnitude. Using the methods of probability theory, their influence on the estimate of the true value of the measured quantity can be taken into account with a much smaller error than the errors of the individual measurements.

Therefore, when the random error is greater than the error of the measuring instrument, it is necessary to repeat the same measurement many times to reduce its value. This allows minimizing the random error and making it comparable with the error of the instrument. If the random error is less than the error of the device, then it does not make sense to reduce it.

In addition, errors are divided into absolute , relative and instrumental. The absolute error is the error expressed in units of the measured value. Relative error is the ratio of the absolute error to the true value of the measured quantity. The component of the measurement error, which depends on the error of the measuring instruments used, is called the instrumental measurement error.


2. ERRORS OF DIRECT EQUALLY ACCURATE MEASUREMENTS. THE NORMAL DISTRIBUTION LAW.

Direct measurements are measurements in which the value of the studied quantity is found directly from experimental data, for example by reading an instrument that measures the desired quantity. To find the random error, the measurement must be carried out several times. The results of such measurements have error values of similar size and are called equally accurate.

Suppose that as a result of n measurements of the quantity X, carried out with the same accuracy, the values x_1, x_2, …, x_n were obtained. As shown in error theory, the value closest to the true value X_0 of the measured quantity X is the arithmetic mean

x̄ = (x_1 + x_2 + … + x_n)/n = (1/n) Σ_{i=1..n} x_i . (2.1)

The arithmetic mean is considered only the most probable value of the measured quantity. The results of individual measurements generally differ from the true value X_0. The absolute error of the i-th measurement is

Δx_i′ = X_0 − x_i

and can take both positive and negative values with equal probability. Summing all the errors, we get

Σ_{i=1..n} Δx_i′ = n·X_0 − Σ_{i=1..n} x_i ,

whence

X_0 = (1/n) Σ_{i=1..n} x_i + (1/n) Σ_{i=1..n} Δx_i′ = x̄ + (1/n) Σ_{i=1..n} Δx_i′ . (2.2)

In this expression, the second term on the right-hand side tends to zero for large n, since any positive error can be paired with a negative one of equal magnitude. Then X_0 = x̄. With a limited number of measurements only the approximate equality X_0 ≈ x̄ holds. Thus x̄ can be called the actual value.

In all practical cases the value X_0 is unknown; there is only a certain probability that X_0 lies in some interval around x̄, and it is required to determine the interval corresponding to this probability. As an estimate of the absolute error of a single measurement, Δx_i = x̄ − x_i is used.

It determines the accuracy of a given measurement.

For a series of measurements, the arithmetic mean (average absolute) error is determined as

h = (1/n) Σ_{i=1..n} |Δx_i| .

It defines the limits within which more than half of the measurements lie. Hence X_0 falls, with a sufficiently high probability, in the interval from x̄ − h to x̄ + h. The result of measuring the quantity X is then written as x = x̄ ± h.

The quantity X is measured the more accurately, the smaller the interval in which the true value X_0 lies.

The absolute measurement error Δx by itself does not yet determine the accuracy of the measurement. Let, for example, the accuracy of some ammeter be 0.1 A, and let current measurements be made in two electrical circuits, giving the values 32 ± 0.1 A and 0.2 ± 0.1 A. Although the absolute measurement error is the same, the measurement accuracy is different: in the first case the measurement is quite accurate, while in the second it allows one to judge only the order of magnitude. Therefore, when evaluating the quality of a measurement, the error must be compared with the measured value, which gives a better idea of the accuracy of the measurement. For this purpose the concept of relative error is introduced:

δx = Δx / x̄ . (2.3)

The relative error is usually expressed as a percentage.

Since in most cases the measured quantities have a dimension, then the absolute errors are dimensional, and the relative errors are dimensionless. Therefore, with the help of the latter, it is possible to compare the accuracy of measurements of dissimilar quantities. Finally, the experiment must be set up in such a way that the relative error remains constant over the entire measurement range.

It should be noted that with correct and carefully performed measurements, the arithmetic mean error of the result is close to the error of the measuring instrument.

If the desired quantity X is measured many times, the frequency of occurrence of a particular value x_i can be represented by a graph in the form of a stepped curve, a histogram (see Fig. 1), where y is the number of readings and Δx_i = x_{i+1} − x_i (i runs from −n to +n). With an increase in the number of measurements and a decrease in the interval Δx_i, the histogram turns into a continuous curve characterizing the density of the probability that the value x_i falls in the interval Δx_i.


The distribution of a random variable is understood as the totality of all possible values of the random variable and the probabilities corresponding to them. The distribution law of a random variable is any correspondence between the possible values of the random variable and their probabilities. The most general form of the distribution law is the distribution function F(x).

Then the function p(x) = F′(x) is the probability distribution density, or differential distribution function. A plot of the probability density is called a distribution curve.

The function p(x) has the property that the product p(x)dx is the probability that an individual, randomly chosen value of the measured quantity falls in the interval (x, x + dx).

In the general case, this probability can be described by various distribution laws (normal (Gaussian), Poisson, Bernoulli, binomial, negative binomial, geometric, hypergeometric, discrete uniform, negative exponential). However, most often the probability that the value x_i occurs in the interval (x, x + dx) is, in physical experiments, described by the normal distribution law, the Gauss law (see Fig. 2):

p(x) = (1/(σ√(2π))) · exp(−(x − x̄)² / (2σ²)) , (2.4)

where σ² is the variance of the general population. The general population is the entire set of possible measurement values x_i or possible error values Δx_i.

The widespread use of Gauss's law in error theory is explained by the following reasons:

1) errors equal in absolute value but opposite in sign occur equally often when the number of measurements is large;

2) errors that are small in absolute value are more common than large ones, i.e., the probability of an error occurring is the smaller, the greater its absolute value;

3) measurement errors take a continuous series of values.

However, these conditions are never strictly met. But experiments have confirmed that in the region where the errors are not very large, the normal distribution law is in good agreement with the experimental data. Using the normal law, you can find the probability of an error of a particular value.

The Gaussian distribution is characterized by two parameters: the mean value of the random variable and the variance σ². The mean value is given by the abscissa (x = x̄) of the axis of symmetry of the distribution curve, and the variance shows how quickly the probability of an error decreases as its absolute value grows. The curve has a maximum at x = x̄; therefore, the mean value is the most probable value of the quantity X. The variance is determined by the half-width of the distribution curve, i.e. the distance from the axis of symmetry to the inflection points of the curve. It is the mean square of the deviations of the results of individual measurements from their arithmetic mean, taken over the entire distribution. If, when measuring a physical quantity, only the constant value x = x̄ is obtained, then σ² = 0; but if the values of the random variable X take values not equal to x̄, its variance is non-zero and positive. The variance thus serves as a measure of the fluctuations of the values of a random variable.

The measure of the spread of the results of individual measurements about the mean value must be expressed in the same units as the values of the measured quantity. For this reason the quantity

σ = √(σ²)

is called the mean square (root-mean-square) error.

It is the most important characteristic of the measurement results and remains constant under the same experimental conditions.

The value of this quantity determines the shape of the distribution curve.

Since the area under the curve remains constant (equal to unity) as σ changes, the curve changes its shape: with decreasing σ the distribution curve stretches upward near the maximum at x = x̄ and contracts in the horizontal direction.

As σ increases, the value of the function p(x̄) decreases and the distribution curve stretches along the x axis (see Fig. 2).

For the normal distribution law, the root-mean-square error of a single measurement is

σ = √( (1/n) Σ_{i=1..n} (x_i − x̄)² ) , (2.5)

and the root-mean-square error of the mean value is

σ_x̄ = σ / √n . (2.6)

The root-mean-square error characterizes the measurement errors more accurately than the arithmetic mean error, since it is obtained quite strictly from the law of distribution of random error values. In addition, its direct connection with the variance, the calculation of which is facilitated by a number of theorems, makes the mean square error a very convenient parameter.

Along with the dimensional error σ, the dimensionless relative error δσ = σ/x̄ is also used, which, like δx, is expressed either in fractions of a unit or as a percentage. The final measurement result is written as x = x̄ ± σ.

However, in practice it is impossible to take very many measurements, so a normal distribution cannot be built exactly enough to determine the true value X_0. In this case, x̄ can be considered a good approximation to the true value, and a fairly accurate estimate of the measurement error is given by the sample variance, which follows from the normal distribution law but refers to a finite number of measurements. This name is explained by the fact that out of the whole set of values x_i, i.e. the general population, only a finite number n of values of the quantity x_i is chosen (measured), called a sample. A sample is characterized by the sample mean and the sample variance.

Then the sample mean square error of a single measurement (the empirical standard) is

S_n = √( Σ_{i=1..n} (x̄ − x_i)² / (n − 1) ) , (2.8)

and the sample mean square error of a series of measurements is

S_x̄ = S_n / √n = √( Σ_{i=1..n} (x̄ − x_i)² / (n(n − 1)) ) . (2.9)

It can be seen from expression (2.9) that by increasing the number of measurements one can make the mean square error arbitrarily small. However, at n > 10 a noticeable change in S_x̄ is achieved only with a very significant number of additional measurements, so a further increase in the number of measurements is inexpedient. In addition, systematic errors cannot be completely eliminated, and when the random error becomes smaller than the systematic one, a further increase in the number of experiments also makes no sense.
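For illustration, a minimal Python sketch of formulas (2.1), (2.8) and (2.9) as reconstructed above; the measurement values are hypothetical:

import math

x = [20.03, 20.01, 20.04, 20.02, 20.00, 20.03]   # repeated direct measurements (hypothetical)
n = len(x)

x_mean = sum(x) / n                                                # formula (2.1)
s_n = math.sqrt(sum((x_mean - xi) ** 2 for xi in x) / (n - 1))     # formula (2.8)
s_mean = s_n / math.sqrt(n)                                        # formula (2.9)

print(f"mean = {x_mean:.4f}, S_n = {s_n:.4f}, S_mean = {s_mean:.4f}")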

Thus, the problem of finding an approximate value of a physical quantity and its error has been solved. Now it is necessary to determine the reliability of the found actual value. The reliability of measurements is understood as the probability that the true value falls within a given confidence interval. The interval (x̄ − ε, x̄ + ε) in which the true value X_0 lies with a given probability is called the confidence interval. Let us assume that the probability that the measurement result x̄ differs from the true value X_0 by no more than ε is equal to 1 − α, i.e.

p(x̄ − ε < X_0 < x̄ + ε) = 1 − α . (2.10)

In the theory of errors, ε is usually taken as a multiple of the root-mean-square error of the mean, ε = t·σ_x̄. Then

p(x̄ − tσ_x̄ < X_0 < x̄ + tσ_x̄) = Φ(t) , (2.11)

where Φ(t) is the probability integral (Laplace function), i.e. the normal distribution function:

Φ(t) = (2/√(2π)) ∫_0^t exp(−z²/2) dz , (2.12) where t = ε/σ_x̄ .

Thus, in order to characterize the true value, both the error and the reliability must be known. If the confidence interval is widened, the reliability that the true value X_0 falls within this interval increases. A high degree of reliability is essential for critical measurements. This means that in such cases a large confidence interval must be chosen, or the measurements must be carried out with greater accuracy (i.e. with a smaller σ_x̄), which can be achieved, for example, by repeating the measurements many times.

Under confidence level is understood as the probability that the true value of the measured quantity falls within a given confidence interval. The confidence interval characterizes the measurement accuracy of a given sample, and the confidence level characterizes the measurement reliability.

In the vast majority of experimental problems the confidence level is 0.90–0.95, and higher reliability is not required. Thus, at t = 1, formulas (2.10)–(2.12) give 1 − α = Φ(t) = 0.683, i.e. more than 68% of the measurements fall in the interval (x̄ − σ, x̄ + σ). At t = 2, 1 − α = 0.955, and at t = 3, 1 − α = 0.997. The latter means that almost all measured values lie in the interval (x̄ − 3σ, x̄ + 3σ). It can be seen from this example that the interval does contain most of the measured values, i.e. the parameter σ can serve as a good indicator of the measurement accuracy.

Until now it has been assumed that the number of measurements, though finite, is sufficiently large. In reality, however, the number of measurements is almost always small; moreover, both in technology and in scientific research the results of two or three measurements are often used. In this situation the quantities S_n and S_x̄ can at best determine only the order of magnitude of the variance. There is a correct method for determining the probability of finding the desired value in a given confidence interval, based on the Student distribution (proposed in 1908 by the English mathematician W. S. Gosset). Denote by Δx the interval by which the arithmetic mean can deviate from the true value X_0, i.e. Δx = X_0 − x̄. In other words, we want to determine the quantity

t_α = Δx·√n / S_n = (X_0 − x̄)·√n / S_n ,

where S_n is determined by formula (2.8). This quantity obeys the Student distribution. The Student distribution is characterized by the fact that it does not depend on the parameters X_0 and σ of the normal general population and, for a small number of measurements (n < 20), allows one to estimate the error Δx = X_0 − x̄ for a given confidence probability α or, for a given value of Δx, to find the reliability of the measurements. This distribution depends only on the variable t_α and the number of degrees of freedom l = n − 1.


The Student distribution is valid for n ≥ 2 and is symmetric with respect to t_α = 0 (see Fig. 3). With an increase in the number of measurements the t_α-distribution tends to the normal distribution (in practice, for n > 20).

The confidence level for a given error of the measurement result is obtained from the expression

p (–<X 0 <+) = 1 – a. (2.14)

At the same time, the value t a is similar to the coefficient t in formula (2.11). the value t a is called Student's coefficient, its values ​​are given in the reference tables. Using relations (2.14) and reference data, one can also solve the inverse problem: for a given reliability a, determine the permissible error of the measurement result.
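For illustration, a minimal Python sketch of the confidence estimate (2.14); the Student coefficient is taken here from scipy.stats rather than from a reference table, and the data are hypothetical:

import math
from scipy import stats

x = [20.03, 20.01, 20.04, 20.02, 20.00, 20.03]   # hypothetical measurements
n = len(x)
x_mean = sum(x) / n
s_n = math.sqrt(sum((x_mean - xi) ** 2 for xi in x) / (n - 1))
s_mean = s_n / math.sqrt(n)

confidence = 0.95                                             # confidence level
t_alpha = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)     # Student coefficient
half_width = t_alpha * s_mean

print(f"X0 lies in [{x_mean - half_width:.4f}, {x_mean + half_width:.4f}] "
      f"with probability {confidence}")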

The Student's distribution also makes it possible to establish that, with a probability arbitrarily close to certainty, for a sufficiently large n the arithmetic mean will differ as little as possible from the true value X 0 .

It was assumed that the distribution law of the random error is known. However, often when solving practical problems, it is not necessary to know the distribution law, it is enough just to study some numerical characteristics of a random variable, for example, the mean value and variance. At the same time, the calculation of the variance makes it possible to estimate the confidence probability even in the case when the error distribution law is unknown or differs from the normal one.

If only one measurement is carried out, the accuracy of the measurement of a physical quantity (if it is carried out carefully) is characterized by the accuracy of the measuring device.

3. ERRORS OF INDIRECT MEASUREMENTS

Often, when conducting an experiment, a situation arises in which the desired quantity u = f(x_i) cannot be determined directly, but the quantities x_i can be measured.

For example, to measure the density ρ one most often measures the mass m and the volume V, and the density is calculated by the formula ρ = m/V.

The quantities x_i contain, as usual, random errors, i.e. what is observed is x_i′ = x_i ± Δx_i. As before, we assume that the x_i are distributed according to the normal law.

1. Let u = f(x) be a function of one variable. In this case the absolute error is

Δu = |du/dx| · Δx . (3.1)

The relative error of the result of indirect measurements is

δu = Δu / u . (3.2)

2. Let u = f(x, y) be a function of two variables. Then the absolute error is

Δu = √( (∂f/∂x)²·Δx² + (∂f/∂y)²·Δy² ) , (3.3)

and the relative error is

δu = Δu / u . (3.4)

3. Let u = f(x, y, z, …) be a function of several variables. Then, by analogy, the absolute error is

Δu = √( (∂f/∂x)²·Δx² + (∂f/∂y)²·Δy² + (∂f/∂z)²·Δz² + … ) , (3.5)

and the relative error is

δu = Δu / u ,

where Δx, Δy, Δz, … are determined according to formula (2.9).
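For illustration, a minimal Python sketch of the error of an indirect measurement for the example ρ = m/V, assuming the quadrature form of formula (3.3) given above; the measured values and their errors are hypothetical:

import math

m, dm = 125.6, 0.2        # mass in g and its error (hypothetical)
V, dV = 47.3, 0.3         # volume in cm^3 and its error (hypothetical)

rho = m / V
# partial derivatives: d(rho)/dm = 1/V, d(rho)/dV = -m/V**2
d_rho = math.sqrt((dm / V) ** 2 + (m * dV / V ** 2) ** 2)   # absolute error, formula (3.3)
delta_rho = d_rho / rho                                     # relative error, formula (3.4)

print(f"rho = {rho:.3f} +/- {d_rho:.3f} g/cm^3 ({100 * delta_rho:.1f} %)")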

Table 2 gives formulas for determining the errors of indirect measurements for some commonly used functions.

Table 2

Function u     Absolute error Δu                              Relative error δu
e^x            e^x·Δx                                         Δx
ln x           Δx/x                                           Δx/(x·ln x)
sin x          |cos x|·Δx                                     |ctg x|·Δx
cos x          |sin x|·Δx                                     |tg x|·Δx
tg x           Δx/cos²x                                       2Δx/|sin 2x|
ctg x          Δx/sin²x                                       2Δx/|sin 2x|
x^y            x^y·√((y·Δx/x)² + (ln x·Δy)²)                  √((y·Δx/x)² + (ln x·Δy)²)
x·y            √((y·Δx)² + (x·Δy)²)                           √((Δx/x)² + (Δy/y)²)
x/y            √((Δx/y)² + (x·Δy/y²)²)                        √((Δx/x)² + (Δy/y)²)

4. CHECKING THE NORMAL DISTRIBUTION

All the above confidence estimates of both mean values ​​and variances are based on the hypothesis of normality of the law of distribution of random measurement errors and therefore can be applied only as long as the experimental results do not contradict this hypothesis.

If the results of the experiment raise doubts about the normality of the distribution law, then to resolve the issue of the suitability or unsuitability of the normal distribution law, it is necessary to make a sufficiently large number of measurements and apply one of the methods described below.

Mean absolute deviation (MAD) check. The technique can be used for not very large samples (n < 120). For this, the MAD is computed by the formula

MAD = (1/n) Σ_{i=1..n} |x_i − x̄| . (4.1)

For a sample with an approximately normal distribution law, the following relation must hold:

| MAD / S_n − 0.7979 | < 0.4 / √n . (4.2)

If this inequality (4.2) is satisfied, then the hypothesis of normal distribution is confirmed.
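For illustration, a minimal Python sketch of the MAD check, assuming the form of criterion (4.2) given above; the sample is hypothetical:

import math

x = [10.2, 9.8, 10.0, 10.1, 9.9, 10.3, 9.7, 10.0, 10.1, 9.9]   # hypothetical sample
n = len(x)
x_mean = sum(x) / n

mad = sum(abs(xi - x_mean) for xi in x) / n                          # formula (4.1)
s_n = math.sqrt(sum((xi - x_mean) ** 2 for xi in x) / (n - 1))       # formula (2.8)

normal_like = abs(mad / s_n - 0.7979) < 0.4 / math.sqrt(n)           # check (4.2)
print("consistent with normality" if normal_like else "normality rejected")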

Check of agreement by the χ² ("chi-square") criterion, or Pearson's goodness-of-fit test. The criterion is based on a comparison of the empirical frequencies with the theoretical frequencies that can be expected if the hypothesis of a normal distribution is accepted. The measurement results, after elimination of gross and systematic errors, are grouped into intervals in such a way that these intervals cover the entire axis and the amount of data in each interval is large enough (at least five). For each interval (x_{i−1}, x_i) the number t_i of measurement results falling within it is counted. Then the probability p_i of falling into this interval under the normal probability distribution law is calculated,

p_i = F((x_i − x̄)/S_n) − F((x_{i−1} − x̄)/S_n) , (4.3)

where F is the normal distribution function, and the statistic

χ² = Σ_{i=1..l} (t_i − n·p_i)² / (n·p_i) (4.4)

is formed, where l is the number of intervals and n is the total number of measurement results (n = t_1 + t_2 + … + t_l).

If the value of χ² calculated by formula (4.4) turns out to be greater than the critical tabulated value of χ², determined for a chosen confidence level P and the number of degrees of freedom k = l − 3, then with reliability P it can be assumed that the probability distribution of the random errors in the considered series of measurements differs from the normal one. Otherwise there are no sufficient grounds for such a conclusion.
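For illustration, a minimal Python sketch of the χ² check; the binning and the data are hypothetical, and scipy.stats is used in place of the reference tables:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=0.5, size=200)            # hypothetical measurements

# intervals covering the whole axis (the two outer intervals are infinite)
edges = np.array([-np.inf, 9.25, 9.6, 9.95, 10.3, 10.65, np.inf])
t_i, _ = np.histogram(x, bins=edges)                     # empirical counts per interval

mean, s = x.mean(), x.std(ddof=1)
p_i = np.diff(stats.norm.cdf(edges, loc=mean, scale=s))  # probabilities p_i, formula (4.3)

chi2 = np.sum((t_i - len(x) * p_i) ** 2 / (len(x) * p_i))   # formula (4.4)
chi2_crit = stats.chi2.ppf(0.95, df=len(p_i) - 3)           # k = l - 3 degrees of freedom
print("differs from normal" if chi2 > chi2_crit else "no grounds to reject normality")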

Check by the skewness and kurtosis indicators. This method gives an approximate estimate. The skewness indicator A and the kurtosis indicator E are determined by the following formulas:

A = Σ_{i=1..n} (x_i − x̄)³ / (n·S_n³) , (4.5)

E = Σ_{i=1..n} (x_i − x̄)⁴ / (n·S_n⁴) − 3 . (4.6)

If the distribution is normal, then both of these indicators should be small. Their smallness is usually judged by comparison with their root-mean-square errors, which are calculated as

S_A = √( 6(n − 1) / ((n + 1)(n + 3)) ) , (4.7)

S_E = √( 24n(n − 2)(n − 3) / ((n − 1)²(n + 3)(n + 5)) ) . (4.8)
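For illustration, a minimal Python sketch of the skewness and kurtosis check, assuming formulas (4.5)–(4.8) as given above; the sample is hypothetical:

import math

x = [10.2, 9.8, 10.0, 10.1, 9.9, 10.3, 9.7, 10.0, 10.1, 9.9, 10.2, 9.8]   # hypothetical
n = len(x)
m = sum(x) / n
s = math.sqrt(sum((xi - m) ** 2 for xi in x) / (n - 1))   # S_n, formula (2.8)

A = sum((xi - m) ** 3 for xi in x) / (n * s ** 3)         # skewness, formula (4.5)
E = sum((xi - m) ** 4 for xi in x) / (n * s ** 4) - 3     # kurtosis, formula (4.6)

S_A = math.sqrt(6 * (n - 1) / ((n + 1) * (n + 3)))                          # formula (4.7)
S_E = math.sqrt(24 * n * (n - 2) * (n - 3) /
                ((n - 1) ** 2 * (n + 3) * (n + 5)))                         # formula (4.8)

print(f"A = {A:.3f} (S_A = {S_A:.3f}),  E = {E:.3f} (S_E = {S_E:.3f})")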

5. METHODS FOR EXCLUDING GROSS ERRORS

When a measurement result is obtained that differs sharply from all the other results, there is a suspicion that a gross error has been made. In this case it must be checked immediately whether the basic measurement conditions have been violated. If such a check was not made in time, the question of whether to reject the sharply different value is decided by comparing it with the rest of the measurement results. Different criteria are applied depending on whether or not the root-mean-square error σ_i of the measurements is known (it is assumed that all measurements are made with the same accuracy and independently of one another).

Exclusion method with known σ_i. First the coefficient t is determined by the formula

t = |x* − x̄| / σ_i , (5.1)

where x* is the outlying value (the suspected gross error). The mean x̄ is determined by formula (2.1) without taking the suspected value x* into account.

Next, the significance level α is set, at which errors whose probability is less than α are excluded. Usually one of three significance levels is used: the 5% level (errors whose probability is less than 0.05 are excluded), the 1% level (less than 0.01) and the 0.1% level (less than 0.001).

At the chosen significance level α, the outlying value x* is considered a gross error and excluded from further processing of the measurement results if, for the coefficient t calculated by formula (5.1), the condition 1 − Φ(t) < α is satisfied.
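For illustration, a minimal Python sketch of the exclusion criterion (5.1) with known σ; here 1 − Φ(t) is computed as 2(1 − F(t)) with the standard normal distribution function F, and the data and σ are hypothetical:

from scipy import stats

x = [10.02, 9.98, 10.01, 10.00, 9.99, 10.63]   # hypothetical data; the last value is suspicious
sigma = 0.02                                   # known rms error of a single measurement
alpha = 0.01                                   # chosen significance level

x_star = x[-1]                                 # suspected gross error
mean_rest = sum(x[:-1]) / (len(x) - 1)         # mean without the suspected value, formula (2.1)

t = abs(x_star - mean_rest) / sigma            # formula (5.1)
p = 2 * (1 - stats.norm.cdf(t))                # 1 - Phi(t), two-sided tail probability
print("reject x* as a gross error" if p < alpha else "keep x*")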

Exclusion method with unknown σ_i.

If the root-mean-square error of a single measurement σ_i is not known in advance, it is estimated approximately from the measurement results using formula (2.8). Then the same algorithm is applied as for known σ_i, with the only difference that in formula (5.1) the value S_n calculated by formula (2.8) is used instead of σ_i.

Three sigma rule.

Since the choice of the reliability of the confidence estimate allows some arbitrariness, the three-sigma rule has become widespread in processing experimental results: the deviation of the true value of the measured quantity from the arithmetic mean of the measurement results does not exceed three times the root-mean-square error of this mean.

Thus, the three-sigma rule is the confidence estimate

x̄ − 3σ_x̄ < X_0 < x̄ + 3σ_x̄

in the case of a known value of σ, or the confidence estimate

x̄ − 3S_x̄ < X_0 < x̄ + 3S_x̄

in the case of an unknown value of σ.

The first of these estimates has a reliability of Φ(3) = 0.9973 regardless of the number of measurements.

The reliability of the second estimate depends significantly on the number of measurements n .

The dependence of the reliability P on the number of measurements n for estimating a gross error in the case of an unknown σ is given in Table 4 (the last column corresponds to n → ∞).

Table 4

n    5      6      7      8      9      10     14     20     30     50     150    ∞
P    0.960  0.970  0.976  0.980  0.983  0.985  0.990  0.993  0.995  0.996  0.997  0.9973

6. PRESENTATION OF MEASUREMENT RESULTS

Measurement results can be presented in the form of graphs and tables. The latter method is the simplest. In some cases the results of studies can only be presented in the form of a table. But a table does not give a visual representation of the dependence of one physical quantity on another, so in many cases a graph is built. It can be used to find quickly the dependence of one quantity on another, i.e. an analytical formula relating the quantities x and y is found from the measured data. Such formulas are called empirical. The accuracy of finding the function y(x) from a graph is determined by the quality of the plot. Consequently, when high accuracy is not required, graphs are more convenient than tables: they take up less space, readings can be taken from them faster, and when they are plotted, outliers in the course of the function due to random measurement errors are smoothed out. If particularly high accuracy is required, it is preferable to present the experimental results in the form of tables and to find intermediate values using interpolation formulas.

Mathematical processing of measurement results by the experimenter does not set the task of revealing the true nature of the functional relationship between variables, but only makes it possible to describe the results of the experiment with the simplest formula, which makes it possible to use interpolation and apply methods of mathematical analysis to the observed data.

Graphic method. Most often, a rectangular coordinate system is used to plot graphs. To facilitate the construction, you can use graph paper. In this case, distance readings on the graphs should be done only by divisions on paper, and not using a ruler, since the length of the divisions can be different vertically and horizontally. Beforehand, it is necessary to select reasonable scales along the axes so that the measurement accuracy corresponds to the reading accuracy according to the graph and the graph is not stretched or compressed along one of the axes, as this leads to an increase in the reading error.

Next, points representing the measurement results are plotted on the graph. To highlight different results, they are applied with different icons: circles, triangles, crosses, etc. Since in most cases the errors in the values ​​of the function are greater than the errors in the argument, only the error of the function is applied in the form of a segment with a length equal to twice the error on a given scale. In this case, the experimental point is located in the middle of this segment, which is limited by dashes at both ends. After that, a smooth curve is drawn so that it passes as close as possible to all experimental points and approximately the same number of points is on both sides of the curve. The curve should (as a rule) lie within the measurement errors. The smaller these errors, the better the curve coincides with the experimental points. It is important to note that it is better to draw a smooth curve outside the margin of error than to have a break in the curve near a single point. If one or more points lie far from the curve, then this often indicates a gross error in the calculation or measurement. Curves on the graphs are most often built using patterns.

You should not take too many points when constructing a graph of a smooth dependence, and only for curves with maxima and minima, it is necessary to plot points more often in the extremum region.

When plotting graphs, a technique called the alignment method or the stretched thread method is often used. It is based on the geometric selection of a straight line "by eye".

If this technique fails, then in many cases the transformation of the curve into a straight line is achieved by using one of the functional scales or grids. Most often logarithmic or semi-logarithmic grids are used. This technique is also useful when it is necessary to stretch or compress some part of the curve. Thus, it is convenient to use the logarithmic scale to display a quantity under study that varies by several orders of magnitude within the limits of the measurements. This method is recommended for finding approximate values of coefficients in empirical formulas or for measurements with low data accuracy. On a logarithmic grid a straight line represents a power-law dependence, and on a semi-logarithmic grid an exponential dependence; the constant term of the dependence can be zero in some cases. However, when using a linear scale all values on the graph are measured with the same absolute accuracy, while when using a logarithmic scale they are measured with the same relative accuracy.

It should also be noted that it is often difficult to judge from the available limited section of the curve (especially if not all points lie on the curve) what type of function should be used for the approximation. Therefore, the experimental points are transferred to one or another coordinate grid and only then they look at which of them the data obtained most closely coincide with the straight line, and in accordance with this, an empirical formula is chosen.

Selection of empirical formulas. Although there is no general method that would make it possible to select the best empirical formula for any measurement results, it is still possible to find an empirical relationship that most accurately reflects the desired relationship. Full agreement between the experimental data and the desired formula should not be achieved, since the interpolation polynomial or other approximating formula will repeat all the measurement errors, and the coefficients will not have a physical meaning. Therefore, if the theoretical dependence is not known, then choose a formula that better matches the measured values ​​and contains fewer parameters. To determine the appropriate formula, the experimental data are plotted graphically and compared with various curves that are plotted according to known formulas on the same scale. By changing the parameters in the formula, you can change the shape of the curve to a certain extent. In the process of comparison, it is necessary to take into account the existing extrema, the behavior of the function for different values ​​of the argument, the convexity or concavity of the curve in different sections. Having chosen the formula, the values ​​of the parameters are determined so that the difference between the curve and the experimental data is no more than the measurement errors.

In practice, linear, exponential and power dependences are most often used.
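For illustration, a minimal Python sketch of choosing between the linear, exponential and power forms by checking on which functional grid the data lie closest to a straight line; the data are hypothetical, and the absolute value of the correlation coefficient is used as the measure of straightness:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.3, 8.1, 16.5, 31.8, 64.9])     # roughly exponential growth (hypothetical)

def straightness(u, v):
    """Absolute correlation coefficient of the transformed data: closer to 1 is straighter."""
    return abs(np.corrcoef(u, v)[0, 1])

candidates = {
    "linear:      y = a + b*x":   straightness(x, y),
    "exponential: y = a*e^(b*x)": straightness(x, np.log(y)),
    "power:       y = a*x^b":     straightness(np.log(x), np.log(y)),
}
print(max(candidates, key=candidates.get))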

7. SOME PROBLEMS OF THE ANALYSIS OF EXPERIMENTAL DATA

Interpolation. Interpolation is understood, firstly, as finding the values of a function for intermediate values of the argument that are not in the table and, secondly, as replacing a function by an interpolating polynomial when its analytical expression is unknown and the function must be subjected to certain mathematical operations. The simplest interpolation methods are linear and graphical. Linear interpolation can be used when the dependence y(x) is expressed by a straight line or by a curve close to a straight line, for which such interpolation does not lead to gross errors. In some cases linear interpolation is possible even with a complex dependence y(x), if it is carried out within such a small change of the argument that the dependence between the variables can be considered linear without noticeable error. In graphical interpolation the unknown function y(x) is replaced by an approximate graphical representation (based on the experimental points or tabular data), from which the values of y are determined for any x within the measurement range. However, accurate graphical construction of complex curves, for example a curve with sharp extrema, is sometimes very difficult, so graphical interpolation is of limited use.

Thus, in many cases it is not possible to apply either linear or graphical interpolation. For this reason interpolating functions have been found that allow one to calculate the values of y with sufficient accuracy for any functional dependence y(x), provided that it is continuous. The interpolating function has the form

y = B_0 + B_1(x − x_0) + B_2(x − x_0)(x − x_1) + … + B_n(x − x_0)(x − x_1)…(x − x_{n−1}) , (7.1)

where B_0, B_1, …, B_n are coefficients to be determined. Since polynomial (7.1) is represented by a curve of parabolic type, such interpolation is called parabolic.

The coefficients of the interpolating polynomial are found by solving a system of (n + 1) linear equations obtained by substituting the known values y_i and x_i into equation (7.1).

Interpolation is most simply performed when the intervals between the values of the argument are constant, i.e.

x_{i+1} − x_i = h = const , (7.2)

where h is a constant called the step. In general

x_i = x_0 + i·h . (7.3)

When using interpolation formulas one has to deal with the differences of the values of y and with the differences of these differences, i.e. with the differences of the function y(x) of various orders. The differences of any order are calculated by the formula

Δ^k y_i = Δ^{k−1} y_{i+1} − Δ^{k−1} y_i . (7.4)

For example,

Δy_0 = y_1 − y_0 ,  Δ²y_0 = Δy_1 − Δy_0 = y_2 − 2y_1 + y_0 .

When calculating the differences, it is convenient to arrange them in the form of a table (see Table 4), in each column of which the differences are recorded between the corresponding values ​​of the minuend and the subtrahend, i.e., a diagonal table is compiled. Differences are usually recorded in units of the last digit.

Table 4

Differences of the function y(x)

x      y       Δy        Δ²y        Δ³y        Δ⁴y
x_0    y_0
               Δy_0
x_1    y_1               Δ²y_0
               Δy_1                 Δ³y_0
x_2    y_2               Δ²y_1                 Δ⁴y_0
               Δy_2                 Δ³y_1
x_3    y_3               Δ²y_2
               Δy_3
x_4    y_4

Since the function y(x) is expressed by the polynomial (7.1) of degree n in x, the differences are also polynomials whose degrees decrease by one on passing to the next difference. The n-th difference of a polynomial of degree n is a constant, i.e. it contains x to the zero power. All differences of higher order are equal to zero. This determines the degree of the interpolating polynomial.

By transforming the function (7.1), Newton's first interpolation formula can be obtained:

y = y_0 + (x − x_0)/h · Δy_0 + (x − x_0)(x − x_1)/(2!·h²) · Δ²y_0 + … + (x − x_0)(x − x_1)…(x − x_{n−1})/(n!·h^n) · Δ^n y_0 . (7.5)

It is used to find the values of y for any x within the measurement range. Let us represent formula (7.5) in a slightly different form:

y = y_0 + q·Δy_0 + q(q − 1)/2! · Δ²y_0 + … + q(q − 1)…(q − n + 1)/n! · Δ^n y_0 , where q = (x − x_0)/h . (7.6)

The last two formulas are sometimes called Newton's interpolation formulas for forward interpolation. These formulas include differences going diagonally downwards, and it is convenient to use them at the beginning of the table of experimental data, where there are enough differences.

Newton's second interpolation formula, derived from the same equation (7.1), is as follows:

y = y_n + q·Δy_{n−1} + q(q + 1)/2! · Δ²y_{n−2} + … + q(q + 1)…(q + n − 1)/n! · Δ^n y_0 , where q = (x − x_n)/h . (7.7)

Formula (7.7) is usually called Newton's interpolation formula for backward interpolation. It is used to determine the values of y at the end of the table.
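For illustration, a minimal Python sketch of forward interpolation with a constant step, assuming the finite-difference form (7.6) given above; the tabulated data are hypothetical:

def newton_forward(xs, ys, x):
    """Interpolate y(x) from equidistant nodes xs with values ys (form (7.6))."""
    n = len(ys)
    h = xs[1] - xs[0]
    # diagonal table of finite differences: diffs[k][i] = Delta^k y_i
    diffs = [list(ys)]
    for k in range(1, n):
        prev = diffs[-1]
        diffs.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])

    q = (x - xs[0]) / h
    term, result = 1.0, ys[0]
    for k in range(1, n):
        term *= (q - (k - 1)) / k          # builds q(q-1)...(q-k+1)/k!
        result += term * diffs[k][0]
    return result

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [1.0, 1.6487, 2.7183, 4.4817, 7.3891]   # values of e^x at the nodes
print(newton_forward(xs, ys, 0.75))          # close to e^0.75 = 2.117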

Now consider interpolation for unequally spaced values ​​of the argument.

Suppose again that the function y(x) is given by a set of values x_i and y_i, but the intervals between successive values of x_i are not equal. Newton's formulas above cannot be used, because they contain a constant step h. In problems of this kind the reduced (divided) differences must be calculated:

f(x_0; x_1) = (y_1 − y_0)/(x_1 − x_0) ,  f(x_1; x_2) = (y_2 − y_1)/(x_2 − x_1) ,
f(x_0; x_1; x_2) = (f(x_1; x_2) − f(x_0; x_1))/(x_2 − x_0) , etc. (7.8)

Differences of higher orders are calculated similarly. As in the case of equidistant values of the argument, if f(x) is a polynomial of degree n, then the differences of order n are constant and the differences of higher order are equal to zero. In simple cases the tables of reduced differences have a form similar to the difference tables for equidistant values of the argument.

In addition to the Newton interpolation formulas considered above, the Lagrange interpolation formula is often used:

y = Σ_{i=0..n} y_i · Π_{j≠i} (x − x_j)/(x_i − x_j) . (7.9)

In this formula each term is a polynomial of degree n and all the terms are of equal standing. Therefore none of them can be neglected until the calculations are complete.
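For illustration, a minimal Python sketch of the Lagrange formula (7.9) for unequally spaced nodes; the nodes are hypothetical:

def lagrange(xs, ys, x):
    """Value of the Lagrange interpolating polynomial (7.9) at the point x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)   # i-th basis polynomial
        total += yi * basis
    return total

xs = [1.0, 2.0, 4.0, 7.0]        # unequally spaced argument values (hypothetical)
ys = [1.0, 4.0, 16.0, 49.0]      # y = x^2 at the nodes
print(lagrange(xs, ys, 3.0))     # exactly 9.0, since x^2 has degree 2 <= n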

Inverse interpolation. In practice it is sometimes necessary to find the value of the argument that corresponds to a certain value of the function. In this case the inverse function is interpolated; it should be borne in mind that the differences of the inverse function are not constant, so the interpolation must be carried out for unequally spaced values of the argument, i.e. using formula (7.8) or (7.9).

Extrapolation. Extrapolation is the calculation of values of the function y outside the range of the argument x in which the measurements were taken. When the analytical expression of the desired function is unknown, extrapolation must be carried out very carefully, since the behavior of the function y(x) outside the measurement interval is not known. Extrapolation is permissible if the course of the curve is smooth and there is no reason to expect sharp changes in the process under study. However, extrapolation should be carried out within narrow limits, for example within one step h; at more distant points incorrect values of y may be obtained. The same formulas are used for extrapolation as for interpolation: Newton's first formula is used when extrapolating backwards, and Newton's second formula when extrapolating forward. The Lagrange formula applies in both cases. It should also be borne in mind that extrapolation leads to larger errors than interpolation.

Numerical integration.

Trapezoid formula. The trapezoid formula is usually used when the values of the function have been measured for equidistant values of the argument, i.e. with a constant step. By the trapezoid rule, as an approximate value of the integral

I = ∫_{x_0}^{x_n} y(x) dx (7.10)

one takes the value

I ≈ h·( (y_0 + y_n)/2 + y_1 + y_2 + … + y_{n−1} ) , (7.11)

Fig. 7.1. Comparison of numerical integration methods

i.e. the integral (7.10) is taken to be equal to (7.11). The geometric interpretation of the trapezoid formula (see Fig. 7.1) is as follows: the area of the curvilinear trapezoid is replaced by the sum of the areas of rectilinear trapezoids. The total error of the integral calculated by the trapezoid formula is estimated as the sum of two errors: the truncation error, caused by replacing the curvilinear trapezoid by rectilinear ones, and the rounding error, caused by the errors in measuring the values of the function. The truncation error of the trapezoid formula is

R_tr ≤ (x_n − x_0)·h²/12 · M_2 , where M_2 = max |y″(x)| on [x_0, x_n] . (7.12)

Rectangle formulas. The rectangle formulas, like the trapezoid formula, are also used in the case of equidistant values of the argument. The approximate integral sum is determined by one of the formulas

I ≈ h·(y_0 + y_1 + … + y_{n−1}) , (7.13)

I ≈ h·(y_1 + y_2 + … + y_n) . (7.14)

The geometric interpretation of the rectangle formulas is given in Fig. 7.1. The error of formulas (7.13) and (7.14) is estimated by the inequality

R ≤ (x_n − x_0)·h/2 · M_1 , where M_1 = max |y′(x)| on [x_0, x_n] . (7.15)

Simpson's formula. The integral is approximately determined by the formula

I ≈ h/3 · ( y_0 + y_n + 4(y_1 + y_3 + … + y_{n−1}) + 2(y_2 + y_4 + … + y_{n−2}) ) , (7.16)

where n is an even number. The error of Simpson's formula is estimated by the inequality

R ≤ (x_n − x_0)·h⁴/180 · M_4 , where M_4 = max |y^(4)(x)| on [x_0, x_n] . (7.17)

Simpson's formula leads to exact results for the case when the integrand is a polynomial of the second or third degree.
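For illustration, a minimal Python sketch comparing the rectangle, trapezoid and Simpson formulas on equidistant nodes for a test integral with a known exact value:

import math

def rectangles(y, h):
    return h * sum(y[:-1])                             # left-rectangle formula (7.13)

def trapezoids(y, h):
    return h * ((y[0] + y[-1]) / 2 + sum(y[1:-1]))     # trapezoid formula (7.11)

def simpson(y, h):
    # Simpson's formula (7.16); the number of steps len(y) - 1 must be even
    return h / 3 * (y[0] + y[-1] + 4 * sum(y[1:-1:2]) + 2 * sum(y[2:-1:2]))

a, b, n = 0.0, math.pi, 10
h = (b - a) / n
y = [math.sin(a + i * h) for i in range(n + 1)]

exact = 2.0                                            # integral of sin(x) over [0, pi]
print(rectangles(y, h), trapezoids(y, h), simpson(y, h), exact)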

Numerical integration of differential equations. Consider the first-order ordinary differential equation y′ = f(x, y) with the initial condition y = y_0 at x = x_0. It is required to find an approximate solution y = y(x) on the segment [x_0, x_k].

Fig. 7.2. Geometric interpretation of Euler's method

To do this, the segment is divided into n equal parts of length h = (x_k − x_0)/n. The approximate values y_1, y_2, …, y_n of the function y(x) at the division points x_1, x_2, …, x_n = x_k are found by various methods.

Euler's broken-line method. For a given value y_0 = y(x_0) the remaining values y_i ≈ y(x_i) are calculated sequentially by the formula

y_{i+1} = y_i + h·f(x_i, y_i) , (7.18)

where i = 0, 1, …, n − 1.

Graphically, the Euler method is shown in Fig. 7.2, where the graph of the solution y = y(x) is represented approximately by a broken line (hence the name of the method).

Runge-Kutta method. It provides higher accuracy than the Euler method. The required values y_i are calculated sequentially by the formula

y_{i+1} = y_i + (k_1 + 2k_2 + 2k_3 + k_4)/6 , (7.19)

where k_1 = h·f(x_i, y_i), k_2 = h·f(x_i + h/2, y_i + k_1/2), k_3 = h·f(x_i + h/2, y_i + k_2/2), k_4 = h·f(x_i + h, y_i + k_3).
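For illustration, a minimal Python sketch of Euler's method (7.18) and the Runge-Kutta method (7.19) for the test equation y′ = y, y(0) = 1 (exact solution e^x):

import math

def euler(f, x0, y0, xk, n):
    h = (xk - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)                        # formula (7.18)
        x += h
    return y

def runge_kutta(f, x0, y0, xk, n):
    h = (xk - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6    # formula (7.19)
        x += h
    return y

f = lambda x, y: y
print(euler(f, 0.0, 1.0, 1.0, 10), runge_kutta(f, 0.0, 1.0, 1.0, 10), math.e)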

REVIEW OF SCIENTIFIC LITERATURE

A literature review is an essential part of any research report. The review should fully and systematically state the state of the issue, allow an objective assessment of the scientific and technical level of work, correctly choose the ways and means to achieve the goal, and evaluate both the effectiveness of these means and the work as a whole. The subject of analysis in the review should be new ideas and problems, possible approaches to solving these problems, the results of previous studies, economic data, and possible ways to solve problems. Contradictory information contained in various literary sources should be analyzed and evaluated with particular care.

From the analysis of the literature, it should be clear that in this narrow issue it is known quite reliably, which is doubtful, debatable; what are the priority, key tasks in the set technical problem; where and how to look for their solutions.

The time spent on the review is added up like this:

Research always has a narrow, specific goal. In the conclusion of the review, the choice of purpose and method is substantiated. The review should prepare this decision. From this follows his plan and selection of material. The review considers only such narrow issues that can directly affect the solution of the problem, but so completely that it covers almost all modern literature on this issue.

ORGANIZATION OF REFERENCE AND INFORMATION ACTIVITIES

In our country, information activity is based on the principle of centralized processing of scientific documents, which makes it possible to achieve full coverage of information sources at the lowest cost, to summarize and systematize them in the most qualified way. As a result of such processing, various forms of information publications are prepared. These include:

1) abstract journals (RJ): the main information publications, containing mainly abstracts (sometimes annotations and bibliographic descriptions) of the sources of greatest interest to science and practice. Abstract journals, which announce newly published scientific and technical literature, make it possible to carry out retrospective searches, to overcome language barriers, and to follow achievements in related fields of science and technology;

2) signal information bulletins (SI), which include bibliographic descriptions of literature published in a particular field of knowledge and are essentially bibliographic indexes. Their main task is to inform promptly about all new scientific and technical literature, since this information appears much earlier than in abstract journals;

3) express information: information publications containing extended abstracts of articles, descriptions of inventions and other publications, which make it unnecessary to refer to the original source. The task of express information is to familiarize specialists quickly and fairly completely with the latest achievements of science and technology;

4) analytical reviews- information publications that give an idea of ​​the state and development trends of a certain area (section, problem) of science and technology;

5) abstract reviews- pursuing the same goal as the analytical reviews, and at the same time having a more descriptive character. The authors of abstract reviews do not give their own assessment of the information contained in them;

6) printed bibliographic cards, i.e., a complete bibliographic description of the source of information. They are among the signal publications and perform the functions of alerting about new publications and the possibility of creating catalogs and file cabinets necessary for every specialist, researcher;

7) annotated printed bibliographic cards ;

8) bibliographic indexes .

Most of these publications are also distributed by individual subscription. Detailed information about them can be found in the "Catalogues of publications of scientific and technical information bodies" published annually.

General concept.

The branch of science that studies measurements is metrology.

Metrology is the science of measurements, of methods and means of ensuring their unity, and of ways to achieve the required accuracy.

The following main tasks are solved in metrology: development of a general theory of measurements, of units of physical quantities and of their systems; development of measurement methods and measuring instruments; development of methods for determining measurement accuracy; establishment of the foundations for ensuring the unity of measurements and the uniformity of measuring instruments; creation of standards and exemplary measuring instruments; development of methods for transferring unit sizes from standards and exemplary measuring instruments to working measuring instruments.

Physical quantities. The International System of Units of physical quantities (SI).

Physical quantity- this is a characteristic of one of the properties of a physical object (phenomenon or process), which is qualitatively common to many physical objects, but quantitatively individual for each object.

The value of a physical quantity- this is an assessment of its value in the form of a certain number of units accepted for it or a number according to the scale adopted for it. For example, 120 mm is the value of a linear quantity; 75 kg - body weight value, HB190 - Brinell hardness number.

Measurement of a physical quantity call a set of operations performed with the help of a technical means that stores a unit or reproduces the scale of a physical quantity, which consists in comparing (explicitly or implicitly) the measured quantity with its unit or scale in order to obtain the value of this quantity in the most convenient form for use.

In measurement theory five types of scales are generally accepted: scales of names (nominal), of order, of intervals, of ratios, and absolute scales.

Three types of physical quantities can be distinguished, which are measured according to different rules.

The first type of physical quantities includes quantities on the set of dimensions of which only the order and equivalence relations are defined. These are relations of the type "softer", "harder", "warmer", "colder", etc. Quantities of this kind include, for example, hardness, defined as the ability of a body to resist the penetration of another body into it; temperature as the degree of heating of the body, etc. The existence of such relationships is established theoretically or experimentally with the help of special means of comparison, as well as on the basis of observations of the results of the impact of a physical quantity on any objects.

For the second type of physical quantities, the relations of order and equivalence hold both between the sizes themselves and between the differences of pairs of sizes. Thus, differences of time intervals are considered equal if the distances between the corresponding marks are equal.

The third type is made up of additive physical quantities. Additive physical quantities are quantities on the set of sizes of which not only the order and equivalence relations are defined, but also the operations of addition and subtraction. Such quantities include length, mass, current strength, etc. They can be measured in parts, and also reproduced using a multi-valued measure based on the summation of individual measures. For example, the sum of the masses of two bodies is the mass of such a body that balances the first two on equal-arm scales.

System of physical quantities- this is a set of interrelated physical quantities, formed in accordance with accepted principles, when some quantities are taken as independent, while others are functions of independent quantities. The system of physical quantities contains basic physical quantities conventionally accepted as independent of other quantities of this system, and derived physical quantities determined through the basic quantities of this system.


Basic physical quantity is a physical quantity included in the system of units and conditionally accepted as independent of other quantities of this system.

A derived unit of a system of units is a unit of a derived physical quantity of that system, formed in accordance with an equation relating it to the basic units.

The derived unit is called coherent, if in this equation the numerical coefficient is taken equal to one. Accordingly, the system of units, consisting of basic units and coherent derivatives, is called the coherent system of units of physical quantities.

Absolute scales have all the features of ratio scales, but in addition they have a natural, unambiguous definition of the unit of measurement. Such scales correspond to relative quantities (ratios of physical quantities of the same kind, described by ratio scales). Among the absolute scales, those whose values lie in the range from 0 to 1 are distinguished. An example of such a quantity is the efficiency factor.

Scales of names (nominal scales) are characterized only by an equivalence relation. Such a scale is essentially qualitative and contains neither a zero nor a unit of measurement. An example of such a scale is the assessment of color by name (color atlases). Since each color has many variations, such a comparison can be performed only by an experienced expert with the appropriate visual capabilities.

Order scales are characterized by the relations of equivalence and order. For the practical use of such a scale it is necessary to establish a number of reference standards. Objects are classified by comparing the intensity of the evaluated property with its reference values. Order scales include, for example, the earthquake scale, the wind-strength scale, the hardness scale of bodies, etc.

A difference (interval) scale differs from an order scale in that, in addition to the equivalence and order relations, the equivalence of intervals (differences) between various quantitative manifestations of a property is added. It has conditional zero values, and the intervals are established by agreement. A typical example of such a scale is the time-interval scale. Time intervals can be added and subtracted.

Relationship scales describe properties to which equivalence, order, and summation relations, and hence subtraction and multiplication, apply. These scales have a natural zero value, and the units of measurement are established by agreement. For the ratio scale, one standard is enough to distribute all the objects under study according to the intensity of the property being measured. An example of a ratio scale is the mass scale. The mass of two objects is equal to the sum of the masses of each of them.

A unit of a physical quantity is a physical quantity of fixed size that is conventionally assigned a numerical value equal to one and is used for the quantitative expression of homogeneous physical quantities. The number of independently established quantities equals the difference between the number of quantities in the system and the number of independent equations connecting them. For example, if the speed of a body is determined by the formula υ = l/t, then only two of these quantities can be established independently, and the third is expressed through them.

Dimension of a physical quantity - an expression in the form of a power monomial composed of products of the symbols of the basic physical quantities raised to various powers; it reflects the relationship of the given quantity to the quantities accepted as basic in the system, with a proportionality coefficient equal to one.

The exponents of the symbols of the basic quantities entering the monomial can be integer or fractional, positive or negative.

The dimension of a quantity is denoted by the symbol dim. In the LMT system the dimension of a quantity X is written as

dim X = L^l · M^m · T^t,

where L, M, T are the symbols of the quantities taken as basic (length, mass and time, respectively), and l, m, t are integer or fractional, positive or negative real numbers called the dimension exponents.

The dimension of a physical quantity is a more general characteristic than the equation that determines the quantity, since the same dimension can be inherent in quantities that have a different qualitative aspect.

For example, the work of a force is determined by the equation A = FL, and the kinetic energy of a moving body by the equation Ek = mυ²/2, yet the dimensions of the two quantities are the same.

Various operations can be performed on dimensions: multiplication, division, exponentiation and root extraction.
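Because a dimension is a power monomial, these operations reduce to arithmetic on the exponents. The sketch below is a minimal illustration in Python (the class name Dimension is chosen for this example, not taken from any standard library); it reproduces the example above, in which work and kinetic energy have the same dimension:

    from fractions import Fraction

    class Dimension:
        """Dimension in the LMT system: the power monomial L^l * M^m * T^t."""
        def __init__(self, l=0, m=0, t=0):
            self.exp = (Fraction(l), Fraction(m), Fraction(t))

        def __mul__(self, other):
            return Dimension(*(a + b for a, b in zip(self.exp, other.exp)))

        def __truediv__(self, other):
            return Dimension(*(a - b for a, b in zip(self.exp, other.exp)))

        def __pow__(self, k):
            return Dimension(*(a * k for a in self.exp))

        def __eq__(self, other):
            return self.exp == other.exp

        def __repr__(self):
            return "L^{} M^{} T^{}".format(*self.exp)

    L, M, T = Dimension(1, 0, 0), Dimension(0, 1, 0), Dimension(0, 0, 1)

    velocity = L / T                  # dim v = L T^-1
    force = M * L / T**2              # dim F = L M T^-2
    work = force * L                  # dim A = L^2 M T^-2
    kinetic_energy = M * velocity**2  # dim E = L^2 M T^-2
    assert work == kinetic_energy     # the dimensions coincide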

Dimension exponent of a physical quantity - the exponent to which the dimension of a basic physical quantity entering the dimension of a derived physical quantity is raised. Dimension exponents are widely used in forming derived units and in checking the homogeneity of equations. If all the dimension exponents are equal to zero, the physical quantity is called dimensionless; all relative quantities (ratios of quantities of the same kind) are dimensionless.

Basic SI units

To cover all areas of science and technology, the International System of Units takes seven units as basic. In mechanics these are the units of length, mass and time; in electricity, the unit of electric current is added; in heat, the unit of thermodynamic temperature; in optics, the unit of luminous intensity; in molecular physics, thermodynamics and chemistry, the unit of amount of substance. These seven units - the meter, kilogram, second, ampere, kelvin, candela and mole - are chosen as the basic SI units.

An important principle observed in the International System of Units is its coherence (consistency). Thus, the choice of the basic units ensures complete consistency between mechanical and electrical units. For example, the watt, the unit of mechanical power (equal to one joule per second), is also the power released by an electric current of 1 ampere at a voltage of 1 volt. Coherent derived units are formed from the defining equations of the corresponding quantities. For example, the unit of speed is formed using the equation that determines the speed of a rectilinearly and uniformly moving point:

υ = L/t,

where υ is the speed, L is the length of the path traveled, and t is the time. Substituting the SI units of L and t gives [υ] = [L]/[t] = 1 m/s. Therefore, the SI unit of speed is the meter per second: the speed of a rectilinearly and uniformly moving point that travels a distance L = 1 m in a time t = 1 s. In the same way, the coherent SI unit of energy is formed from the kinetic-energy equation T = mυ²/2, where T is the kinetic energy, m is the mass of the body and υ is the speed of the point (the numerical factor does not affect the unit): [T] = [m][υ]² = kg·(m/s)² = kg·m²/s² = 1 J, the joule, one of the SI derived units with its own name.
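This bookkeeping can also be done mechanically: a coherent derived unit is simply a product of powers of the base units with a numerical factor of one. The following is a minimal sketch under that assumption (the helper names unit and multiply are illustrative, not from any library):

    # Represent a unit as a mapping base-unit symbol -> exponent; the implicit
    # numerical factor is 1, which is exactly what coherence requires.
    def unit(**exponents):
        return {k: v for k, v in exponents.items() if v != 0}

    def multiply(u1, u2):
        """Multiply two units by adding the exponents of their base units."""
        product = {}
        for k in set(u1) | set(u2):
            e = u1.get(k, 0) + u2.get(k, 0)
            if e != 0:
                product[k] = e
        return product

    metre_per_second = unit(m=1, s=-1)   # coherent unit of speed, v = L/t
    newton = unit(kg=1, m=1, s=-2)       # coherent unit of force
    joule = unit(kg=1, m=2, s=-2)        # coherent unit of energy

    assert multiply(newton, unit(m=1)) == joule                  # N·m = J
    assert multiply(joule, unit(s=-1)) == unit(kg=1, m=2, s=-3)  # J/s = W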




  • 1 General information
  • 2 History
  • 3 SI units
    • 3.1 Basic units
    • 3.2 Derived units
  • 4 Non-SI units
  • Prefixes

General information

The SI was adopted by the XI General Conference on Weights and Measures, and a number of subsequent conferences have made changes to it.

The SI defines seven base units and the derived units formed from them, as well as a set of decimal prefixes. Standard designations for the units and rules for writing derived units have been established.

In Russia, there is GOST 8.417-2002, which prescribes the mandatory use of SI. It lists the units of measurement, gives their Russian and international names, and establishes the rules for their use. According to these rules, only international designations are allowed to be used in international documents and on instrument scales. In internal documents and publications, either international or Russian designations can be used (but not both at the same time).

Basic units: kilogram, meter, second, ampere, kelvin, mole and candela. Within the SI, these units are considered to have independent dimensions, i.e., none of the base units can be derived from the others.

Derived units are obtained from the basic ones using algebraic operations such as multiplication and division. Some of the derived units in the SI System have their own names.

Prefixes can be used before unit names; they indicate that the unit must be multiplied or divided by a certain integer power of 10. For example, the prefix "kilo" means multiplication by 1000 (1 kilometer = 1000 meters). SI prefixes are also called decimal prefixes.
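A minimal sketch of this rule (only a few of the decimal prefixes are listed, and the function name to_base_unit is chosen for the example):

    # Decimal SI prefixes as powers of 10 (subset shown for illustration).
    SI_PREFIXES = {
        "nano": 1e-9, "micro": 1e-6, "milli": 1e-3,
        "": 1.0,
        "kilo": 1e3, "mega": 1e6, "giga": 1e9,
    }

    def to_base_unit(value, prefix):
        """Convert a value given with a prefix to the unprefixed unit."""
        return value * SI_PREFIXES[prefix]

    print(to_base_unit(3.0, "kilo"))   # 3 kilometers   -> 3000.0 meters
    print(to_base_unit(250, "milli"))  # 250 millimeters -> 0.25 meters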

History

The SI is based on the metric system of measures, which was created by French scientists and first introduced widely after the French Revolution. Before the introduction of the metric system, units of measurement were chosen arbitrarily and independently of one another, so conversion from one unit to another was difficult. Moreover, different units were used in different places, sometimes under the same names. The metric system was intended to become a convenient and unified system of measures and weights.

In 1799, two standards were approved - for the unit of length (meter) and for the unit of weight (kilogram).

In 1874, the CGS system was introduced, based on three units of measurement - centimeter, gram and second. Decimal prefixes from micro to mega were also introduced.

In 1889, the 1st General Conference on Weights and Measures adopted a system of measures similar to the CGS but based on the meter, kilogram and second, since these units were recognized as more convenient for practical use.

Subsequently, basic units were introduced for measuring physical quantities in the field of electricity and optics.

In 1960, the XI General Conference on Weights and Measures adopted the standard, which for the first time was called the "International System of Units (SI)".

In 1971, the XIV General Conference on Weights and Measures amended the SI, adding, in particular, the unit for the amount of substance (the mole).

The SI is now accepted as the legal system of units by most countries in the world and is almost always used in science (even in countries that have not adopted the SI).

SI units

Unlike ordinary abbreviations, the designations of SI units and of units derived from them are not followed by a period.

Basic units

Quantity | Russian name | International name | Russian designation | International designation
Length | метр | meter (metre) | м | m
Mass | килограмм | kilogram | кг | kg
Time | секунда | second | с | s
Electric current | ампер | ampere | А | A
Thermodynamic temperature | кельвин | kelvin | К | K
Luminous intensity | кандела | candela | кд | cd
Amount of substance | моль | mole | моль | mol

Derived units

Derived units can be expressed in terms of base units using the mathematical operations of multiplication and division. For convenience, some of the derived units have been given their own names; such units can also be used in mathematical expressions to form other derived units.

The mathematical expression for a derived unit of measure follows from the physical law by which this unit of measure is determined or the definition of the physical quantity for which it is introduced. For example, speed is the distance a body travels per unit time. Accordingly, the unit of speed is m/s (meter per second).

Often the same unit of measurement can be written in different ways, using different combinations of basic and derived units (see, for example, the last column of the table below). In practice, however, established (or simply generally accepted) expressions are used that best reflect the physical meaning of the measured quantity. For example, the value of a moment of force should be written as N×m, not as m×N or J.

Derived units with their own names
Quantity | Russian name | International name | Russian designation | International designation | Expression in terms of SI units
Plane angle | радиан | radian | рад | rad | m×m⁻¹ = 1
Solid angle | стерадиан | steradian | ср | sr | m²×m⁻² = 1
Celsius temperature | градус Цельсия | degree Celsius | °C | °C | K
Frequency | герц | hertz | Гц | Hz | s⁻¹
Force | ньютон | newton | Н | N | kg×m/s²
Energy | джоуль | joule | Дж | J | N×m = kg×m²/s²
Power | ватт | watt | Вт | W | J/s = kg×m²/s³
Pressure | паскаль | pascal | Па | Pa | N/m² = kg×m⁻¹×s⁻²
Luminous flux | люмен | lumen | лм | lm | cd×sr
Illuminance | люкс | lux | лк | lx | lm/m² = cd×sr×m⁻²
Electric charge | кулон | coulomb | Кл | C | A×s
Electric potential difference | вольт | volt | В | V | J/C = kg×m²×s⁻³×A⁻¹
Resistance | ом | ohm | Ом | Ω | V/A = kg×m²×s⁻³×A⁻²
Capacitance | фарад | farad | Ф | F | C/V = kg⁻¹×m⁻²×s⁴×A²
Magnetic flux | вебер | weber | Вб | Wb | kg×m²×s⁻²×A⁻¹
Magnetic induction | тесла | tesla | Тл | T | Wb/m² = kg×s⁻²×A⁻¹
Inductance | генри | henry | Гн | H | kg×m²×s⁻²×A⁻²
Electrical conductance | сименс | siemens | См | S | Ω⁻¹ = kg⁻¹×m⁻²×s³×A²
Radioactivity (activity) | беккерель | becquerel | Бк | Bq | s⁻¹
Absorbed dose of ionizing radiation | грей | gray | Гр | Gy | J/kg = m²/s²
Effective dose of ionizing radiation | зиверт | sievert | Зв | Sv | J/kg = m²/s²
Catalytic activity | катал | catal | кат | kat | mol×s⁻¹

Non-SI units

Some non-SI units of measurement are "accepted for use in conjunction with the SI" by the decision of the General Conference on Weights and Measures.

Unit | International name | Russian designation | International designation | Value in SI units
minute | minute | мин | min | 60 s
hour | hour | ч | h | 60 min = 3600 s
day | day | сут | d | 24 h = 86 400 s
degree | degree | ° | ° | (π/180) rad
minute of arc | minute | ′ | ′ | (1/60)° = (π/10 800) rad
second of arc | second | ″ | ″ | (1/60)′ = (π/648 000) rad
liter | litre (liter) | л | l, L | 1 dm³
ton | tonne | т | t | 1000 kg
neper | neper | Нп | Np | —
bel | bel | Б | B | —
electron-volt | electronvolt | эВ | eV | ≈1.602×10⁻¹⁹ J
atomic mass unit | unified atomic mass unit | а. е. м. | u | ≈1.660 54×10⁻²⁷ kg
astronomical unit | astronomical unit | а. е. | ua | ≈1.495 978 706 91×10¹¹ m
nautical mile | nautical mile | миля | — | 1852 m (exactly)
knot | knot | уз | — | 1 nautical mile per hour = (1852/3600) m/s
are | are | а | a | 10² m²
hectare | hectare | га | ha | 10⁴ m²
bar | bar | бар | bar | 10⁵ Pa
angstrom | ångström | Å | Å | 10⁻¹⁰ m
barn | barn | б | b | 10⁻²⁸ m²