Biographies Characteristics Analysis

4 elements of the linear theory of automatic control. TAU for the smallest: an example of the implementation of the PID controller in Unity3D


When the question of the implementation of PID controllers is somewhat deeper than it seems. So much so that the young self-made ones who decide to implement such a regulation scheme are waiting for a lot of wonderful discoveries, and the topic is relevant. So I hope this opus is useful to someone, so let's get started.

Try number one

As an example, let's try to implement a control scheme using the example of turn control in a simple 2D space arcade game, step by step, starting from the very beginning (don't forget that this is a tutorial?).


Why not 3D? Because the implementation doesn't change, except that you have to tweak the PID controller to control pitch, yaw, and roll. Although the question of the correct application of PID control along with quaternions is really interesting, maybe in the future I will consecrate it, but even NASA prefers Euler angles instead of quaternions, so we'll get by with a simple model on a two-dimensional plane.


To begin with, let's create the spaceship game object itself, which will consist of the ship object itself at the top level of the hierarchy, attach a child Engine object to it (purely for the sake of special effects). Here's what it looks like for me:



And on the object of the spacecraft itself we throw in inspector all sorts of components. Looking ahead, I will give a screen of how it will look at the end:



But that's later, but for now there are no scripts in it yet, only a standard gentleman's set: Sprite Render, RigidBody2D, Polygon Collider, Audio Source (why?).


Actually, physics is the most important thing for us now and control will be carried out exclusively through it, otherwise, the use of a PID controller would lose its meaning. Let's also leave the mass of our spacecraft at 1 kg, and all the coefficients of friction and gravity are equal to zero - in space.


Because in addition to the spacecraft itself, there are a bunch of other, less intelligent space objects, then we first describe the parent class base body, which will contain references to our components, initialization and destruction methods, as well as a number of additional fields and methods, for example, to implement celestial mechanics:


BaseBody.cs

using UnityEngine; using System.Collections; using System.Collections.Generic; namespace Assets.Scripts.SpaceShooter.Bodies ( public class BaseBody: MonoBehaviour ( readonly float _deafultTimeDelay = 0.05f; public static List _bodies = new List (); #region RigidBody public Rigidbody2D _rb2d; public Collider2D _c2d; #endregion #region References public Transform _myTransform; public GameObject _myObject; ///

/// Object that appears when destroyed /// public GameObject _explodePrefab; #endregion #region Audio public AudioSource _audioSource; /// /// Sounds played when damaged /// public AudioClip _hitSounds; /// /// Sounds that play when an object appears /// public AudioClip _awakeSounds; /// /// Sounds played before death /// public AudioClip _deadSounds; #endregion #region External Force Variables /// /// External forces acting on the object /// public Vector2 _ExternalForces = new Vector2(); /// /// Current velocity vector /// public Vector2 _V = new Vector2(); /// /// Current gravity force vector /// public Vector2 _G = new Vector2(); #endregion public virtual void Awake() ( Init(); ) public virtual void Start() ( ) public virtual void Init() ( _myTransform = this.transform; _myObject = gameObject; _rb2d = GetComponent (); _c2d = GetComponentsInChildren (); _audioSource = GetComponent (); PlayRandomSound(_awakeSounds); BaseBody bb = GetComponent (); _bodies.Add(bb); ) /// /// Destruction of the character /// public virtual void Destroy() ( _bodies.Remove(this); for (int i = 0; i< _c2d.Length; i++) { _c2d[i].enabled = false; } float _t = PlayRandomSound(_deadSounds); StartCoroutine(WaitAndDestroy(_t)); } /// /// Wait some time before destroying /// /// Waiting time /// public IEnumerator WaitAndDestroy(float waitTime) ( yield return new WaitForSeconds(waitTime); if (_explodePrefab) ( Instantiate(_explodePrefab, transform.position, Quaternion.identity); ) Destroy(gameObject, _deafultTimeDelay); ) /// /// Play a random sound /// /// Array of sounds /// Sound duration public float PlayRandomSound(AudioClip audioClip) ( float _t = 0; if (audioClip.Length > 0) ( int _i = UnityEngine.Random.Range(0, audioClip.Length - 1); AudioClip _audioClip = audioClip[_i]; _t = _audioClip.length;_audioSource.PlayOneShot(_audioClip); ) return _t; ) /// /// Taking damage /// /// Damage level public virtual void Damage(float damage) ( PlayRandomSound(_hitSounds); ) ) )


It seems that they described everything that is needed, even more than necessary (within the framework of this article). Now let's inherit the class of the ship from it ship, which should be able to move and turn:


SpaceShip.cs

using UnityEngine; using System.Collections; using System.Collections.Generic; namespace Assets.Scripts.SpaceShooter.Bodies ( public class Ship: BaseBody ( public Vector2 _movement = new Vector2(); public Vector2 _target = new Vector2(); public float _rotation = 0f; public void FixedUpdate() ( float torque = ControlRotate( _rotation); Vector2 force = ControlForce(_movement); _rb2d.AddTorque(torque); _rb2d.AddRelativeForce(force); ) public float ControlRotate(Vector2 rotate) ( float result = 0f; return result; ) public Vector2 ControlForce(Vector2 movement) ( Vector2 result = new Vector2(); return result; ) ) )


While there is nothing interesting in it, at the moment it is just a stub class.


We will also describe the base (abstract) class for all BaseInputController input controllers:


BaseInputController.cs

using UnityEngine; using Assets.Scripts.SpaceShooter.Bodies; namespace Assets.Scripts.SpaceShooter.InputController ( public enum eSpriteRotation ( Rigth = 0, Up = -90, Left = -180, Down = -270 ) public abstract class BaseInputController: MonoBehaviour ( public GameObject _agentObject; public Ship _agentBody; // Link on the ship logic component public eSpriteRotation _spriteOrientation = eSpriteRotation.Up; //This is due to the non-standard // orientation of the sprite "up" instead of "right" public abstract void ControlRotate(float dt); public abstract void ControlForce(float dt); public virtual void Start() ( _agentObject = gameObject; _agentBody = gameObject.GetComponent (); ) public virtual void FixedUpdate() ( float dt = Time.fixedDeltaTime; ControlRotate(dt); ControlForce(dt); ) public virtual void Update() ( //TO DO ) ) )


And finally, the player controller class PlayerFigtherInput:


PlayerInput.cs

using UnityEngine; using Assets.Scripts.SpaceShooter.Bodies; namespace Assets.Scripts.SpaceShooter.InputController ( public class PlayerFigtherInput: BaseInputController ( public override void ControlRotate(float dt) ( // Determine the position of the mouse relative to the player Vector3 worldPos = Input.mousePosition; worldPos = Camera.main.ScreenToWorldPoint(worldPos); / / Store mouse pointer coordinates float dx = -this.transform.position.x + worldPos.x; float dy = -this.transform.position.y + worldPos.y; //Pass vector2 direction target = new Vector2(dx, dy ); _agentBody._target = target; // Calculate rotation according to keypress float targetAngle = Mathf.Atan2(dy, dx) * Mathf.Rad2Deg; _agentBody._targetAngle = targetAngle + (float)_spriteOrientation; ) public override void ControlForce( float dt) ( //Pass movement _agentBody._movement = Input.GetAxis("Vertical") * Vector2.up + Input.GetAxis("Horizontal") * Vector2.right; ) ) )


It seems to be finished, now we can finally move on to what all this was started for, i.e. PID controllers (do not forget, I hope?). Its implementation seems simple to the point of disgrace:


using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Assets.Scripts.Regulator ( // This attribute is required for the regulator fields // to be displayed in the inspector and serialized public class SimplePID ( public float Kp, Ki, Kd; private float lastError; private float P, I, D; public SimplePID() ( Kp = 1f; Ki = 0; Kd = 0.2f; ) public SimplePID(float pFactor, float iFactor, float dFactor) ( this.Kp = pFactor; this.Ki = iFactor; this.Kd = dFactor; ) public float Update(float error, float dt) ( P = error; I += error * dt; D = (error - lastError) / dt; lastError = error; float CO = P * Kp + I * Ki + D * Kd ; return CO; ) ) )

We will take the default values ​​of the coefficients from the ceiling: it will be a trivial unit coefficient of the proportional control law Kp = 1, a small value of the coefficient for the differential control law Kd = 0.2, which should eliminate the expected fluctuations and a zero value for Ki, which is chosen because in our software model, there are no static errors (but you can always introduce them, and then heroically fight with the help of the integrator).


Now let's go back to our SpaceShip class and try to use our creation as the spaceship's rotation controller in the ControlRotate method:


public float ControlRotate(Vector2 rotate) ( float MV = 0f; float dt = Time.fixedDeltaTime; //Calculate the error float angleError = Mathf.DeltaAngle(_myTransform.eulerAngles.z, targetAngle); //Get corrective acceleration MV = _angleController.Update (angleError, dt); return MV; )

The PID controller will carry out precise angular positioning of the spacecraft using torque alone. Everything is honest, physics and self-propelled guns, almost like in real life.


And without those Quaternion.Lerp of yours

if (!_rb2d.freezeRotation) rb2d.freezeRotation = true; float deltaAngle = Mathf.DeltaAngle(_myTransform.eulerAngles.z, targetAngle); float T = dt * Mathf.Abs(_rotationSpeed ​​/ deltaAngle); // Transform the angle into a Quaternion vector rot = Quaternion.Lerp(_myTransform.rotation, Quaternion.Euler(new Vector3(0, 0, targetAngle)), T); // Change the rotation of the object _myTransform. rotation = rot;


The resulting Ship.cs source code is under the spoiler

using UnityEngine; using Assets.Scripts.Regulator; namespace Assets.Scripts.SpaceShooter.Bodies ( public class Ship: BaseBody ( public GameObject _flame; public Vector2 _movement = new Vector2(); public Vector2 _target = new Vector2(); public float _targetAngle = 0f; public float _angle = 0f; public SimplePID _angleController = new SimplePID(); public void FixedUpdate() ( float torque = ControlRotate(_targetAngle); Vector2 force = ControlForce(_movement); _rb2d.AddTorque(torque); _rb2d.AddRelativeForce(force); ) public float ControlRotate(float rotate) ( float MV = 0f; float dt = Time.fixedDeltaTime; _angle = _myTransform.eulerAngles.z; //Calculate the error float angleError = Mathf.DeltaAngle(_angle, rotate); //Get corrective acceleration MV = _angleController.Update( angleError, dt); return MV; ) public Vector2 ControlForce(Vector2 movement) ( Vector2 MV = new Vector2(); //Piece of engine running special effect code for the sake of if (movement != Vector2.zero) ( if (_flame != null) ( _flame.SetActive(tru e); ) ) else ( if (_flame != null) ( _flame.SetActive(false); ) ) MV = movement; returnMV; ) ) )


All? Are we going home?



WTF! What's happening? Why is the ship turning in a strange way? And why does it bounce off other objects so sharply? Is this stupid PID controller not working?


No panic! Let's try to figure out what's going on.


At the moment a new value of SP is received, there is a sharp (stepped) jump in the mismatch of the error, which, as we remember, is calculated like this: accordingly, there is a sharp jump in the derivative of the error, which we calculate in this line of code:


D = (error - lastError) / dt;

You can, of course, try other differentiation schemes, for example, three-point, or five-point, or ... but it still won't help. Well, they don’t like the derivatives of sharp jumps - at such points the function is not differentiable. However, it is worth experimenting with different differentiation and integration schemes, but then not in this article.


I think that the time has come to build graphs of the transient process: step action from S(t) = 0 to SP(t) = 90 degrees for a body weighing 1 kg, a force arm of 1 meter long and a differentiation grid step of 0.02 s - just like in our example on Unity3D (actually not quite, when constructing these graphs, it was not taken into account that the moment of inertia depends on the geometry of a rigid body, so the transient process will be slightly different, but still similar enough for demonstration). All values ​​on the graph are given in absolute values:


Hmm, what's going on here? Where did the PID controller response go?


Congratulations, we've just encountered the "kick" phenomenon. Obviously, at the time when the process is still PV = 0, and the setpoint is already SP = 90, then with numerical differentiation we obtain the value of the derivative of the order of 4500, which is multiplied by Kd=0.2 and add up with a proportional term, so that at the output we get the value of the angular acceleration of 990, and this is already a form of abuse of the Unity3D physical model (angular velocities will reach 18000 deg / s ... I think this is the limiting value of the angular velocity for RigidBody2D).


  • Maybe it's worth choosing the coefficients with knobs so that the jump is not so strong?
  • Not! The best thing we can achieve in this way is a small amplitude of the derivative jump, however, the jump itself will remain the same, while it is possible to screw up to the complete inefficiency of the differential component.

However, you can experiment.

Attempt number two. Saturation

It is logical that drive unit(in our case, SpaceShip's virtual maneuvering thrusters) cannot work out any large values ​​that our insane regulator can give out. So the first thing we do is saturate the output of the regulator:


public float ControlRotate(Vector2 rotate, float thrust) ( float CO = 0f; float MV = 0f; float dt = Time.fixedDeltaTime; //Calculate the error float angleError = Mathf.DeltaAngle(_myTransform.eulerAngles.z, targetAngle); // Get corrective acceleration CO = _angleController.Update(angleError, dt); //Saturate MV = CO; if (MV > thrust) MV = thrust; if (MV< -thrust) MV = -thrust; return MV; }

And once again the rewritten class Ship completely looks like this

namespace Assets.Scripts.SpaceShooter.Bodies ( public class Ship: BaseBody ( public GameObject _flame; public Vector2 _movement = new Vector2(); public Vector2 _target = new Vector2(); public float _targetAngle = 0f; public float _angle = 0f; public float _thrust = 1f; public SimplePID _angleController = new SimplePID(0.1f,0f,0.05f); public void FixedUpdate() ( _torque = ControlRotate(_targetAngle, _thrust); _force = ControlForce(_movement); _rb2d.AddTorque(_torque); _rb2d.AddRelativeForce(_force); ) public float ControlRotate(float targetAngle, float thrust) ( float CO = 0f; float MV = 0f; float dt = Time.fixedDeltaTime; //Calculate error float angleError = Mathf.DeltaAngle(_myTransform.eulerAngles .z, targetAngle); //Get corrective acceleration CO = _angleController.Update(angleError, dt); //Saturate MV = CO; if (MV > thrust) MV = thrust; if (MV< -thrust) MV = -thrust; return MV; } public Vector2 ControlForce(Vector2 movement) { Vector2 MV = new Vector2(); if (movement != Vector2.zero) { if (_flame != null) { _flame.SetActive(true); } } else { if (_flame != null) { _flame.SetActive(false); } } MV = movement * _thrust; return MV; } public void Update() { } } }


The final scheme of our self-propelled guns will then become like this


At the same time, it becomes clear that the controller output CO(t) slightly different from the process variable MV(t).


Actually from this place you can already add a new game entity - drive unit, through which the process will be controlled, the logic of which can be more complex than just Mathf.Clamp (), for example, you can introduce discretization of values ​​​​(so as not to overload the game physics with values ​​\u200b\u200breaching sixths after the decimal point), a dead zone (again, not it makes sense to overload physics with ultra-small reactions), introduce a delay into the control and non-linearity (for example, a sigmoid) of the drive, and then see what happens.


When we start the game, we will find that the spaceship has finally become controllable:



If you build graphs, you can see that the controller's reaction has already become like this:


Normalized values ​​are already used here, the angles are divided by the SP value, and the controller output is normalized relative to the maximum value at which saturation is already taking place.

Below is a well-known table of the influence of increasing the parameters of the PID controller ( how to reduce the font, otherwise the meringue hyphenation table does not climb?):



And the general algorithm for manual tuning of the PID controller is as follows:


  1. We select the proportional coefficients with the differential and integral links turned off until self-oscillations begin.
  2. Gradually increasing the differential component, we get rid of self-oscillations
  3. If there is a residual control error (displacement), then we eliminate it due to the integral component.

There are no general values ​​for the PID controller parameters: specific values ​​​​depend solely on the process parameters (its transfer characteristic): a PID controller that works perfectly with one control object will be inoperable with another. Moreover, the coefficients at the proportional, integral and differential components are also interdependent.


Attempt number three. Once again derivatives

Having attached a crutch in the form of limiting the values ​​of the controller output, we have not solved the main problem of our controller - the differential component does not feel well with a step change in the error at the controller input. In fact, there are many other crutches, for example, at the time of an abrupt change in SP, "turn off" the differential component or put low-pass filters between SP(t) and an operation due to which a smooth increase in the error will occur, or you can completely turn around and screw in a real Kalman filter to smooth the input data. In general, there are a lot of crutches, and add observer Of course I would like to, but not this time.


Therefore, we will return to the derivative of the mismatch error again and look at it carefully:



Didn't notice anything? If you look closely, you will find that, in general, SP(t) does not change in time (except for the moments of a step change, when the controller receives a new command), i.e. its derivative is zero:





In other words, instead of the error derivative, which is differentiable not everywhere we can use the derivative of the process, which in the world of classical mechanics is usually continuous and differentiable everywhere, and the scheme of our ACS will already take the following form:




We modify the controller code:


using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Assets.Scripts.Regulator ( public class SimplePID ( public float Kp, Ki, Kd; private float P, I, D; private float lastPV = 0f; public SimplePID() ( Kp = 1f; Ki = 0f; Kd = 0.2f ; ) public SimplePID(float pFactor, float iFactor, float dFactor) ( this.Kp = pFactor; this.Ki = iFactor; this.Kd = dFactor; ) public float Update(float error, float PV, float dt) ( P = error; I += error * dt; D = -(PV - lastPV) / dt; lastPV = PV; float CO = Kp * P + Ki * I + Kd * D; return CO; ) ) )

And let's change the ControlRotate method a bit:


public float ControlRotate(Vector2 rotate, float thrust) ( float CO = 0f; float MV = 0f; float dt = Time.fixedDeltaTime; //Calculate the error float angleError = Mathf.DeltaAngle(_myTransform.eulerAngles.z, targetAngle); // Get corrective acceleration CO = _angleController.Update(angleError, _myTransform.eulerAngles.z, dt); //Saturate MV = CO; if (CO >< -thrust) MV = -thrust; return MV; }

And-and-and-and ... if you run the game, you will find that in fact nothing has changed since the last attempt, which was required to be proven. However, if we remove the saturation, then the regulator response graph will look like this:


jump CO(t) is still present, but it is no longer as big as it was at the very beginning, and most importantly, it has become predictable, because is provided exclusively by the proportional component, and is limited by the maximum possible mismatch error and the proportional gain of the PID controller (and this already hints that Kp it makes sense to choose less than unity, for example, 1/90f), but does not depend on the differentiation grid step (i.e., dt). In general, I strongly recommend using the derivative of the process, and not the errors.


I think now it won’t surprise anyone, but you can replace it with the same way, but we won’t dwell on this, you can experiment yourself and tell in the comments what came of it (most interesting)

Attempt number four. Alternative implementations of the PID controller

In addition to the ideal representation of the PID controller described above, in practice the standard form is often used, without coefficients Ki and kd, instead of which temporary constants are used.


This approach is due to the fact that a number of PID tuning techniques are based on the frequency response of the PID controller and the process. Actually, the whole TAU revolves around the frequency characteristics of processes, so for those who want to go deeper, and, suddenly, faced with an alternative nomenclature, I will give an example of the so-called. standard form PID controller:




where, is the differentiation constant that affects the prediction of the state of the system by the regulator,
- integration constant, which affects the error averaging interval by the integral link.


The basic principles of tuning a PID controller in standard form are similar to an idealized PID controller:

  • an increase in the proportional coefficient increases the speed and reduces the margin of stability;
  • with a decrease in the integral component, the control error decreases faster over time;
  • decrease in the constant of integration reduces the margin of stability;
  • an increase in the differential component increases the margin of stability and speed

The source code of the standard form, you can find under the spoiler

namespace Assets.Scripts.Regulator ( public class StandardPID ( public float Kp, Ti, Td; public float error, CO; public float P, I, D; private float lastPV = 0f; public StandardPID() ( Kp = 0.1f; Ti = 10000f; Td = 0.5f; bias = 0f; ) public StandardPID(float Kp, float Ti, float Td) ( this.Kp = Kp; this.Ti = Ti; this.Td = Td; ) public float Update(float error, float PV, float dt) ( this.error = error; P = error; I += (1 / Ti) * error * dt; D = -Td * (PV - lastPV) / dt; CO = Kp * ( P + I + D); lastPV = PV; return CO; ) ) )

The default values ​​are Kp = 0.01, Ti = 10000, Td = 0.5 - with these values, the ship turns fairly quickly and has some margin of stability.


In addition to this form of PID controller, the so-called. recurrent form:



We will not dwell on it, because. it is relevant primarily for hardware programmers working with FPGAs and microcontrollers, where such an implementation is much more convenient and efficient. In our case - let's do something on Unity3D - this is just another implementation of the PID controller, which is no better than others and even less understandable, so once again we will rejoice together how good it is to program in cozy C #, and not in creepy and the scary VHDL, for example.

instead of a conclusion. Where else to add a PID controller

Now let's try to complicate the control of the ship a little using two-loop control: one PID controller, already familiar to us _angleController, is still responsible for the angular positioning, but the second - the new one, _angularVelocityController - controls the rotation speed:


public float ControlRotate(float targetAngle, float thrust) ( float CO = 0f; float MV = 0f; float dt = Time.fixedDeltaTime; _angle = _myTransform.eulerAngles.z; //Rotation angle controller float angleError = Mathf.DeltaAngle(_angle, targetAngle); float torqueCorrectionForAngle = _angleController.Update(angleError, _angle, dt); //Velocity stabilization controller float angularVelocityError = -_rb2d.angularVelocity; float torqueCorrectionForAngularVelocity = _angularVelocityController.Update(angularVelocityError, -angularVelocityError, dt); //Total controller output CO = torqueCorrectionForAngle + torqueCorrectionForAngularVelocity;//Discrete in steps of 100 CO = Mathf.Round(100f * CO) / 100f;///Saturate MV = CO;if (CO > thrust) MV = thrust;if (CO< -thrust) MV = -thrust; return MV; }

The purpose of the second regulator is to dampen excess angular velocities by changing the torque - this is akin to the presence of angular friction, which we turned off when we created the game object. Such a control scheme [perhaps] will make it possible to obtain a more stable behavior of the ship, and even get by with only proportional control coefficients - the second regulator will dampen all oscillations, performing a function similar to the differential component of the first regulator.


In addition, we will add a new player input class - PlayerInputCorvette, in which the turns will be carried out by pressing the left-right keys, and we will leave the target designation with the mouse for something more useful, for example, to control the turret. At the same time, we now have such a parameter as _turnRate - which is responsible for the speed / responsiveness of the turn (it's not clear where to put it better in InputCOntroller or still Ship).


public class PlayerCorvetteInput: BaseInputController ( public float _turnSpeed ​​= 90f; public override void ControlRotate() ( // Find the Vector3 mouse pointer worldPos = Input.mousePosition; worldPos = Camera.main.ScreenToWorldPoint(worldPos); // Store the relative position of the mouse pointer float dx = -this.transform.position.x + worldPos.x; float dy = -this.transform.position.y + worldPos.y; // Pass in the direction of the mouse pointer Vector2 target = new Vector2(dx, dy); _agentBody. _target = target; // Calculate rotation according to keystroke _agentBody._rotation -= Input.GetAxis("Horizontal") * _turnSpeed ​​* Time.deltaTime; ) public override void ControlForce() ( //Pass movement _agentBody._movement = Input .GetAxis("Vertical") * Vector2.up; ) )

Also, for clarity, we throw a script on our knees to display debugging information

namespace Assets.Scripts.SpaceShooter.UI( public class Debugger: MonoBehaviour( Ship _ship; BaseInputController _controller; List _pids = new List (); List _names = new List (); Vector2 _orientation = new Vector2(); // Use this for initialization void Start() ( _ship = GetComponent (); _controller = GetComponent (); _pids.Add(_ship._angleController); _names.Add("Angle controller"); _pids.Add(_ship._angularVelocityController); _names.Add("Angular velocity controller"); ) // Update is called once per frame void Update() ( DrawDebug(); ) Vector3 GetDiretion(eSpriteRotation spriteRotation) ( switch (_controller._spriteOrientation) ( case eSpriteRotation.Rigth: return transform.right; case eSpriteRotation.Up: return transform .up; case eSpriteRotation.Left: return -transform.right; case eSpriteRotation.Down: return -transform.up; ) return Vector3.zero; ) void DrawDebug() ( // Vector3 rotation direction vectorToTarget = transform.position + 5f * new Vector3(-Mathf.Sin(_ship._targetAngle * Mathf.Deg2Rad), Mathf.Cos(_ship._targetAngle * Mathf.Deg2Rad), 0f); // Current Direction Vector3 heading = transform.position + 4f * GetDirection(_controller. _spriteOrientation); //Angular acceleration Vector3 torque = heading - transform.right * _ship._Torque; Debug.DrawLine(transform.position, vectorToTarget, Color.white); Debug.DrawLine(transform.position, heading, Color.green); Debug.DrawLine(heading, torque, Color.red); ) void OnGUI( ) ( float x0 = 10; float y0 = 100; float dx = 200; floatdy=40; floatSliderKpMax = 1; floatSliderKpMin = 0; floatSliderKiMax = .5f; float SliderKiMin = -.5f; floatSliderKdMax = .5f; float SliderKdMin = 0; int i = 0; foreach (SimplePID pid in _pids) ( y0 += 2 * dy; GUI.Box(new Rect(25 + x0, 5 + y0, dx, dy), ""); pid.Kp = GUI.HorizontalSlider(new Rect( pid.Ki = GUI.HorizontalSlider(new Rect(25 + x0, 20 + y0, 200, 10), pid.Ki, SliderKiMin, SliderKiMax); pid.Kd = GUI.HorizontalSlider(new Rect(25 + x0, 35 + y0, 200, 10), pid.Kd, SliderKdMin, SliderKdMax); GUIStyle style1 = new GUIStyle(); style1.alignment = TextAnchor.MiddleRight; style1.fontStyle = FontStyle.Bold; style1.normal.textColor = Color.yellow; style1.fontSize = 9; GUI.Label(new Rect(0 + x0, 5 + y0, 20, 10), "Kp ", style1); GUI.Label(new Rect(0 + x0, 20 + y0, 20, 10), "Ki", ​​style1); GUI.Label(new Rect(0 + x0, 35 + y0, 20, 10 ), "Kd", style1); GUIStyle style2 = new GUIStyle(); style2.alignment = TextAnchor.MiddleLeft; style2.fontStyle = FontStyle.Bold; style2.normal.textColor = Color.yellow; style2.fontSize = 9; GUI .TextField(new Rect(235 + x0, 5 + y0, 60, 10), pid.Kp.ToString(), style2); GUI.TextField(new Rect(235 + x0, 20 + y0, 60, 10), pid. Ki.ToString(), style2); GUI.TextField(new Rect(235 + x0, 35 + y0, 60, 10), pid.Kd.ToString(), style2); GUI.Label(new Rect(0 + x0, -8 + y0, 200, 10), _names, style2); ) ) ) )


The Ship class has also undergone irreversible mutations and should now look like this:

namespace Assets.Scripts.SpaceShooter.Bodies ( public class Ship: BaseBody ( public GameObject _flame; public Vector2 _movement = new Vector2(); public Vector2 _target = new Vector2(); public float _targetAngle = 0f; public float _angle = 0f; public float _thrust = 1f; public SimplePID _angleController = new SimplePID(0.1f,0f,0.05f); public SimplePID _angularVelocityController = new SimplePID(0f,0f,0f); private float _torque = 0f; public float _Torque ( get ( return _torque; ) ) private Vector2 _force = new Vector2(); public Vector2 _Force ( get ( return _force; ) ) public void FixedUpdate() ( _torque = ControlRotate(_targetAngle, _thrust); _force = ControlForce(_movement, _thrust); _rb2d.AddTorque( _torque); _rb2d.AddRelativeForce(_force); ) public float ControlRotate(float targetAngle, float thrust) ( float CO = 0f; float MV = 0f; float dt = Time.fixedDeltaTime; _angle = _myTransform.eulerAngles.z; //Controller float angleError = Mathf.DeltaAngle(_angle, targetAngle); float torqueCorrectionForAngle = _angleController.Update(angleError, _angle, dt); //Velocity stabilization controller float angularVelocityError = -_rb2d.angularVelocity; float torqueCorrectionForAngularVelocity = _angularVelocityController.Update(angularVelocityError, -angularVelocityError, dt); //Total controller output CO = torqueCorrectionForAngle + torqueCorrectionForAngularVelocity; //Discrete in steps of 100 CO = Mathf.Round(100f * CO) / 100f; //Saturate MV = CO; if (CO > thrust) MV = thrust; if (CO< -thrust) MV = -thrust; return MV; } public Vector2 ControlForce(Vector2 movement, float thrust) { Vector2 MV = new Vector2(); if (movement != Vector2.zero) { if (_flame != null) { _flame.SetActive(true); } } else { if (_flame != null) { _flame.SetActive(false); } } MV = movement * thrust; return MV; } public void Update() { } } }

Theory of automatic control(TAU) is a scientific discipline that studies the processes of automatic control of objects of different physical nature. At the same time, with the help of mathematical means, the properties of automatic control systems are revealed and recommendations for their design are developed.

Story

For the first time, information about automata appeared at the beginning of our era in the works of Heron of Alexandria “Pneumatics” and “Mechanics”, which describe automata created by Heron himself and his teacher Ctesibius: a pneumatic automaton for opening the doors of a temple, a water organ, an automaton for selling holy water, etc. Heron's ideas were far ahead of their time and did not find application in his era.

Stability of linear systems

Sustainability- the property of the ACS to return to a given or close to it steady state after any disturbance.

Sustainable ACS- a system in which transient processes are damped.

Operator form of writing a linearized equation.

y(t) = y mouth(t)+y P=y vyn(t)+y St.

y mouth(y vyn) is a particular solution of the linearized equation.

y P(y St.) is the general solution of the linearized equation as a homogeneous differential equation, i.e.

The ACS is stable if the transient processes y n (t) caused by any perturbations will be damped over time, that is, when

Solving the differential equation in the general case, we obtain complex roots p i , p i+1 = ±α i ± jβ i

Each pair of complex conjugate roots corresponds to the following component of the transient equation:

From the results obtained, it can be seen that:

Criteria for sustainability

Routh criterion

To determine the stability of the system, tables of the form are built:

Odds Strings column 1 column 2 column 3
1
2
3
4

For the stability of the system, it is necessary that all elements of the first column have positive values; if there are negative elements in the first column, the system is unstable; if at least one element is equal to zero, and the rest are positive, then the system is on the boundary of stability.

Hurwitz criterion

Hurwitz determinant

Theorem: For the stability of a closed ACS, it is necessary and sufficient that the Hurwitz determinant and all its minors be positive at

Mikhailov criterion

Let us replace , where ω is the angular frequency of oscillations corresponding to the purely imaginary root of the given characteristic polynomial.

Criterion: for the stability of a linear system of the nth order, it is necessary and sufficient that the Mikhailov curve, constructed in coordinates, passes successively through n quadrants.

Consider the relationship between the Mikhailov curve and the signs of its roots(α>0 and β>0)

1) The root of the characteristic equation is a negative real number

2) The root of the characteristic equation is a positive real number

The factor corresponding to the given root

3) The root of the characteristic equation is a complex pair of numbers with a negative real part

The factor corresponding to the given root

4) The root of the characteristic equation is a complex pair of numbers with a positive real part

The factor corresponding to the given root

Nyquist criterion

The Nyquist criterion is a graph-analytical criterion. Its characteristic feature is that the conclusion about the stability or instability of a closed system is made depending on the type of amplitude-phase or logarithmic frequency characteristics of an open system.

Let the open system be represented as a polynomial

then we make a substitution and get:

For more convenient construction of the hodograph for n>2, we bring the equation (*) to the “standard” form:

With this representation, the module A(ω) = | W(jω)| is equal to the ratio of the modules of the numerator and denominator, and the argument (phase) ψ(ω) is the difference between their arguments. In turn, the modulus of the product of complex numbers is equal to the product of the modules, and the argument is the sum of the arguments.

Modules and Arguments Corresponding to Transfer Function Factors

Factor
k k 0
p ω

Then we construct a hodograph for the auxiliary function , for which we will change

At , and at (because n

To determine the resulting angle of rotation, we find the difference between the arguments of the numerator and denominator

The polynomial of the numerator of the auxiliary function has the same degree as the polynomial of its denominator, which implies , therefore, the resulting rotation angle of the auxiliary function is 0. This means that for the stability of the closed system, the hodograph of the auxiliary function vector should not cover the origin, and the hodograph of the function , respectively, a point with coordinates

Part 1. Theory of Automatic Control (TAU)

Lecture 1. Basic terms and definitions of TAU. (2 hours)

Basic concepts.

Control systems of modern chemical-technological processes are characterized by a large number of technological parameters, the number of which can reach several thousand. To maintain the required mode of operation, and ultimately the quality of the products, all these quantities must be kept constant or changed according to a certain law.

The physical quantities that determine the course of the technological process are called process parameters . For example, process parameters can be: temperature, pressure, flow, voltage, etc.

The parameter of the technological process, which must be kept constant or changed according to a certain law, is called controlled variable or adjustable parameter .

The value of the controlled variable at the considered moment of time is called instantaneous value .

The value of the controlled variable obtained at the considered moment of time on the basis of the data of a certain measuring device is called its measured value .

Example 1 Scheme of manual control of the temperature of the drying cabinet.


It is required to manually maintain the temperature in the drying cabinet at the level of T ass.

The human operator, depending on the readings of the mercury thermometer RT, turns on or off the heating element H using the knife switch P. ¨

Based on this example, you can enter definitions:

Control object (object of regulation, OS) - a device, the required mode of operation of which must be supported from the outside by specially organized control actions.



Control – formation of control actions that provide the required operating mode of the OS.

Regulation - a particular type of control, when the task is to ensure the constancy of any output value of the OS.

Automatic control - management carried out without the direct participation of a person.

Input action(X)– impact applied to the input of the system or device.

Output action(Y)- the impact issued at the output of the system or device.

External influence - the impact of the external environment on the system.

The block diagram of the control system for example 1 is shown in fig. 1.2.


Rice. 1.3

Example 3 Scheme of ACP temperature with a measuring bridge.

When the temperature of the object is equal to the specified one, the measuring bridge M (see Fig. 1.4) is balanced, the input of the electronic amplifier of the EI does not receive a signal and the system is in equilibrium. When the temperature deviates, the resistance of the thermistor R T changes and the equilibrium of the bridge is disturbed. A voltage appears at the ED input, the phase of which depends on the sign of the temperature deviation from the set one. The voltage amplified in the EU is supplied to the motor D, which moves the motor of the autotransformer AT in the appropriate direction. When the temperature reaches the setpoint, the bridge will be balanced and the engine will turn off.


Definitions:

Setting influence (the same as the input action X) - the impact on the system that determines the required law of change of the controlled variable).

Control action (u) - influence of the control device on the control object.

control device (CU) - a device that influences the control object in order to ensure the required mode of operation.

Disturbing influence (f) - an action that tends to break the required functional relationship between the setting action and the controlled value.

Control error (e = x - y) - the difference between the prescribed (x) and actual (y) values ​​of the controlled variable.

Regulator (P) - a set of devices connected to a regulated object and providing automatic maintenance of the set value of its regulated value or automatic change of it according to a certain law.

Automatic control system (ACP) - an automatic system with a closed circuit of influence, in which the control (u) is generated as a result of comparing the true value of y with a given value of x.

An additional connection in the block diagram of the ACP, directed from the output to the input of the considered section of the chain of influences, is called feedback (FB). Feedback can be negative or positive.

ACP classification.

1. By purpose (by the nature of the task change):

· stabilizing ACP - a system whose operation algorithm contains an instruction to maintain the controlled value at a constant value (x = const);

· software ACP - a system whose operation algorithm contains an instruction to change the controlled variable in accordance with a predetermined function (x is changed by software);

· tracking ACP - a system whose operation algorithm contains an instruction to change the controlled value depending on a previously unknown value at the ACP input (x = var).

2. By the number of circuits:

· single-loop - containing one contour,

· multi-loop - containing several contours.

3. According to the number of adjustable values:

· one-dimensional - systems with 1 controlled variable,

· multidimensional - systems with several adjustable values.

Multidimensional ACS, in turn, are subdivided into systems:

a) unrelated regulation, in which the regulators are not directly related and can only interact through a common control object for them;

b) coupled regulation, in which regulators of different parameters of the same technological process are interconnected outside the object of regulation.

4. By functional purpose:

ACP for temperature, pressure, flow, level, voltage, etc.

5. By the nature of the signals used for control:

continuous,

Discrete (relay, pulse, digital).

6. By the nature of mathematical relationships:

linear, for which the principle of superposition is valid;

non-linear.

Superposition principle (overlay): If several input actions are applied to the input of the object, then the reaction of the object to the sum of the input actions is equal to the sum of the reactions of the object to each action separately:


L (x 1 + x 2) \u003d L (x 1) + L (x 2),

where L is a linear function (integration, differentiation, etc.).

7. According to the type of energy used for regulation:

pneumatic,

hydraulic,

electrical,

mechanical, etc.

8. According to the principle of regulation:

· by deviation :

The vast majority of systems are built on the principle of feedback - regulation by deviation (see Fig. 1.7).

The element is called the adder. Its output is equal to the sum of the inputs. The blackened sector indicates that this input signal must be taken with the opposite sign.

· out of indignation .

These systems can be used if it is possible to measure the disturbance (see Fig. 1.8). The diagram shows K - amplifier with gain K.

· combined - combine the features of previous ACPs.

This method (see Fig. 1.9) achieves a high quality of control, but its application is limited by the fact that the perturbing effect f cannot always be measured.


Basic models.

The operation of the regulatory system can be described verbally. So, in paragraph 1.1, the system for controlling the temperature of the drying cabinet is described. A verbal description helps to understand the principle of operation of the system, its purpose, features of operation, etc. However, most importantly, it does not give quantitative estimates of the quality of regulation, and therefore is not suitable for studying the characteristics of systems and building automated control systems. Instead, TAU uses more precise mathematical methods for describing the properties of systems:

static characteristics,

dynamic characteristics,

· differential equations,

transfer functions,

frequency characteristics.

In any of these models, the system can be represented as a link with input actions X, disturbances F and output actions Y

Under the influence of these influences, the output value may change. In this case, when a new task is received at the input of the system, it must provide, with a given degree of accuracy, a new value of the controlled variable in the steady state.

steady state is a mode in which the discrepancy between the true value of the controlled variable and its set value will be constant over time.

Static characteristics.

static characteristic element is the dependence of the steady-state values ​​of the output quantity on the value of the quantity at the input of the system, i.e.

y mouth \u003d j (x).

The static characteristic (see Fig. 1.11) is often depicted graphically as a curve y(x).

static An element is called for which, with a constant input action, a constant output value is established over time. For example, when different voltage values ​​are applied to the heater input, it will heat up to the temperature values ​​corresponding to these voltages.

astatic an element is called, in which, with a constant input action, the output signal continuously grows at a constant speed, acceleration, etc.

Linear static element is called a non-inertial element with a linear static characteristic:

y mouth \u003d K * x + a 0.

As you can see, the static characteristic of the element in this case has the form of a straight line with a slope coefficient K.

Linear static characteristics, unlike non-linear ones, are more convenient for studying due to their simplicity. If the object model is non-linear, then it is usually converted to a linear form by linearization.

ACS is called static , if at a constant input action the control error e tends to a constant value, depending on the magnitude of the action.

ACS is called astatic , if at a constant input action the control error tends to zero, regardless of the magnitude of the action.

Laplace transformations.

The study of ASR is greatly simplified by using applied mathematical methods of operational calculus. For example, the functioning of a certain system is described by a DE of the form

, (2.1)

where x and y are the input and output quantities. If in this equation instead of x(t) and y(t) we substitute the functions X(s) and Y(s) of the complex variable s such that

and , (2.2)

then the original DE under zero initial conditions is equivalent to the linear algebraic equation

a 2 s 2 Y(s) + a 1 s Y(s) + a 0 Y(s) = b 1 X(s) + b 0 X(s).

Such a transition from a differential equation to an algebraic equation is called Laplace transform , formulas (2.2), respectively Laplace transform formulas , and the resulting equation - operator equation .

The new functions X(s) and Y(s) are called images x(t) and y(t) according to Laplace, while x(t) and y(t) are originals with respect to X(s) and Y(s).

The transition from one model to another is quite simple and consists in replacing the signs of differentials with operators s n , the signs of integrals with factors , and x(t) and y(t) themselves with images X(s) and Y(s).

For the reverse transition from the operator equation to functions of time, the method is used inverse Laplace transform . The general formula for the inverse Laplace transform is:

, (2.3)

where f(t) - original, F(jw) - image at s = jw, j - imaginary unit, w - frequency.

This formula is quite complicated, so special tables were developed (see Tables 1.1 and 1.2), which summarize the most common functions F(s) and their originals f(t). They make it possible to dispense with the direct use of formula (2.3).

Table 1.2 - Laplace transforms

Original x(t) Image X(s)
d-function
t
t2
t n
e-a-t
a. x(t) a. X(s)
x(t - a) X(s) . e - a s
s n. X(s)

Table 1.2 - Formulas for the inverse Laplace transform (addition)

The law of change of the output signal is usually a function to be found, and the input signal is usually known. Some typical input signals were discussed in Section 2.3. Here are their images:

a single step action has an image X(s) = ,

delta function X(s) = 1,

linear action X(s) = .

Example. Solution of DE using Laplace transforms.

Suppose the input signal has the form of a single step action, i.e. x(t) = 1. Then the image of the input signal X(s) = .

We transform the original DE according to Laplace and substitute X(s):

s 2 Y + 5sY + 6Y = 2sX + 12X,

s 2 Y + 5sY + 6Y = 2s + 12,

Y(s 3 + 5s 2 + 6s) = 2s + 12.

The expression for Y is defined:

.

The original of the received function is not in the table of originals and images. To solve the problem of its search, the fraction is divided into the sum of simple fractions, taking into account the fact that the denominator can be represented as s(s + 2)(s + 3):

= = + + =

Comparing the resulting fraction with the original one, we can compose a system of three equations with three unknowns:

M 1 + M 2 + M 3 = 0 M 1 = 2

5 . M 1 + 3. M 2 + 2. M 3 \u003d 2 à M 2 \u003d -4

6. M 1 = 12 M 3 = 2

Therefore, a fraction can be represented as the sum of three fractions:

= - + .

Now, using table functions, the original output function is determined:

y(t) = 2 - 4 . e -2 t + 2 . e -3 t . ¨

transfer functions.

Examples of typical links.

The link of the system is its element, which has certain properties in a dynamic sense. The links of control systems may have a different physical basis (electrical, pneumatic, mechanical, etc. links), but belong to the same group. The ratio of input and output signals in the links of one group are described by the same transfer functions.

The simplest typical links:

amplifying,

integrating,

The differentiating

aperiodic,

oscillatory,

delayed.

1) Reinforcing link.

The link amplifies the input signal by K times. The link equation y \u003d K * x, the transfer function W (s) \u003d K. The parameter K is called gain .

The output signal of such a link exactly repeats the input signal, amplified by K times (see Fig. 1.15).

Examples of such links are: mechanical transmissions, sensors, inertialess amplifiers, etc.

2) Integrating.

2.1) Ideal integrator.

The output value of an ideal integrator is proportional to the integral of the input value.

; W(s) =

When an action link is applied to the input, the output signal constantly increases (see Fig. 1.16).

This link is astatic, i.e. does not have a steady state.

2.2) Real integrator.

The transfer function of this link has the form:

The transient response, in contrast to the ideal link, is a curve (see Fig. 1.17).

An example of an integrating link is a DC motor with independent excitation, if the stator supply voltage is taken as the input action, and the rotor rotation angle is taken as the output action.

3) Differentiating.

3.1) The ideal differentiator.

The output value is proportional to the time derivative of the input:

With a stepped input, the output is a pulse (d-function).

3.2) Real differentiating.

Ideal differentiating links are not physically realizable. Most of the objects that are differentiating links are real differentiating links. The transient response and transfer function of this link have the form:

4) Aperiodic (inertial).

This link corresponds to DE and PF of the form:

; W(s) = .

Let's determine the nature of the change in the output value of this link when a step action of the value x 0 is applied to the input.

Step action image: X(s) = . Then the image of the output quantity:

Y(s) = W(s) X(s) = = K x 0 .

Let's decompose the fraction into simple ones:

= + = = - = -

The original of the first fraction according to the table: L -1 ( ) = 1, the second:

Then we finally get:

y(t) = K x 0 (1 - ).

The constant T is called time constant.

Most thermal objects are aperiodic links. For example, when voltage is applied to the input of an electric furnace, its temperature will change according to a similar law (see Fig. 1.19).

5) Oscillating link has DE and PF of the form

,

W(s) = .

When a stepped action with amplitude x 0 is applied to the input, the transition curve will be

have one of two types: aperiodic (at T 1 ³ 2T 2) or oscillatory (at T 1< 2Т 2).

6) Delayed.

y(t) = x(t - t), W(s) = e - t s .

The output value y exactly repeats the input value x with some delay t. Examples: movement of cargo along a conveyor, movement of liquid through a pipeline.

Link connections.

Since the object under study is divided into links in order to simplify the analysis of functioning, after determining the transfer functions for each link, the task arises of combining them into one transfer function of the object. The type of the transfer function of the object depends on the sequence of connecting the links:

1) Serial connection.

W about \u003d W 1. W2. W 3 ...

When the links are connected in series, their transfer functions are multiplied.

2) Parallel connection.

W about \u003d W 1 + W 2 + W 3 + ...

When the links are connected in parallel, their transfer functions are added.

3) Feedback

Transfer function according to the task (x):

"+" corresponds to negative OS,

"-" - positive.

To determine the transfer functions of objects with more complex connections of links, either sequential enlargement of the circuit is used, or they are converted according to the Meson formula.

Transfer functions of ASR.

For research and calculation, the structural diagram of the ASR is brought to the simplest standard form "object - controller" by means of equivalent transformations.

This is necessary, firstly, in order to determine the mathematical dependencies in the system, and, secondly, as a rule, all engineering methods for calculating and determining the controller settings are applied for such a standard structure.

In the general case, any one-dimensional ACP with main feedback can be reduced to this form by gradually increasing the links.

If the output of the system y is not applied to its input, then we will get an open control system, the transfer function of which is defined as the product:

W ¥ = W p . W y

(W p - PF of the controller, W y - PF of the control object).

That is, the sequence of links W p and W y can be replaced by one link with W ¥ . The transfer function of a closed system is usually denoted as Ф(s). It can be expressed in terms of W ¥ :

This transfer function Ф з (s) determines the dependence of y on x and is called the transfer function of a closed system along the channel of the master influence (by assignment).

For ASR, there are also transfer functions for other channels:

Ф e (s) = = - by mistake,

Ф in (s) = = - by perturbation.

Since the transfer function of an open system is in the general case a fractional-rational function of the form W ¥ = , then the transfer functions of a closed system can be transformed:

Ф з (s) = = , Ф e (s) = = .

As can be seen, these transfer functions differ only in the expressions of the numerators. The denominator expression is called characteristic expression of a closed system and is denoted as D s (s) \u003d A (s) + B (s), while the expression in the numerator of the open-loop transfer function W ¥ is called characteristic expression of an open system B(s).

Frequency characteristics.

LCH examples.

1. Low pass filter (LPF)

LACH LPCH Circuit Example

The low-pass filter is designed to suppress high-frequency influences.

2. High pass filter (HPF)

LACH LPCH Circuit Example

The high-pass filter is designed to suppress low-frequency influences.

3. Barrier filter.

The trap filter only suppresses a certain range of frequencies.

LAFC and LFC Circuit Example



Stability criteria.

Sustainability.

An important indicator of ASR is stability, since its main purpose is to maintain a given constant value of the controlled parameter or change it according to a certain law. When the controlled parameter deviates from the specified value (for example, under the influence of a disturbance or a change in the reference), the controller acts on the system in such a way as to eliminate this deviation. If the system as a result of this action returns to its original state or passes to another equilibrium state, then such a system is called sustainable . If, on the other hand, oscillations arise with ever-increasing amplitude or a monotonous increase in the error e occurs, then the system is called unstable .

In order to determine whether a system is stable or not, stability criteria are used:

1) root criterion,

2) Stodola's criterion,

3) Hurwitz criterion,

4) Nyquist criterion,

5) Mikhailov's criterion, etc.

The first two criteria are necessary criteria for the stability of individual links and open systems. The Hurwitz criterion is algebraic and is designed to determine the stability of closed systems without delay. The last two criteria belong to the group of frequency criteria, since they determine the stability of closed systems by their frequency characteristics. Their feature is the possibility of application to closed systems with delay, which is the vast majority of control systems.

root criterion.

The root criterion determines the stability of the system by the form of the transfer function. The dynamic characteristic of the system, which describes the main behavioral properties, is a characteristic polynomial, which is in the denominator of the transfer function. By equating the denominator to zero, one can obtain a characteristic equation, the roots of which determine stability.

The roots of the characteristic equation can be both real and complex, and are plotted on the complex plane to determine stability (see Fig. 1.34).

(The symbol denotes the roots of the equation).

Types of roots of the characteristic equation:

Valid:

positive (root number 1);

negative (2);

zero (3);

Complex

complex conjugates (4);

purely imaginary (5);

According to the multiplicity, the roots are:

single (1, 2, 3);

conjugate (4, 5): s i = a ± jw;

multiples (6) s i = s i +1 = …

The root criterion is formulated as follows:

A linear ASR is stable if all the roots of the characteristic equation lie in the left half-plane. If at least one root is on the imaginary axis, which is the stability boundary, then the system is said to be on the stability boundary. If at least one root is in the right half-plane (regardless of the number of roots in the left), then the system is unstable.

In other words, all real roots and real parts of complex roots must be negative. Otherwise, the system is unstable.

Example 3.1. The transfer function of the system has the form:

.

Characteristic equation: s 3 + 2s 2 + 2.25s + 1.25 = 0.

Roots: s 1 \u003d -1; s 2 \u003d -0.5 + j; s 3 \u003d -0.5 - j.

Therefore, the system is stable. ¨

Stodola's criterion.

This criterion is a consequence of the previous one and is formulated as follows: A linear system is stable if all coefficients of the characteristic polynomial are positive.

That is, for the gear ratio from Example 3.1, according to the Stodol criterion, it corresponds to a stable system.

Hurwitz criterion.

The Hurwitz criterion works with the characteristic polynomial of a closed system. As you know, the structural diagram of the ASR mistakenly looks like (see Fig.)

W p - transfer function of the controller,

W y - transfer function of the control object.

Let us define the transfer function for feed-forward (open-loop transfer function, see Section 2.6.4): W ¥ = W p W y .

.

As a rule, the transfer function of an open system has a fractional-rational form:

.

Then after substitution and transformation we get:

.

It follows that the characteristic polynomial of a closed system (CPC) can be defined as the sum of the numerator and denominator W ¥ :

D s (s) \u003d A (s) + B (s).

To determine the stability according to Hurwitz, a matrix is ​​built in such a way that the HPCD coefficients from a n +1 to a 0 are located along the main diagonal. To the right and left of it, the coefficients are written with indices through 2 (a 0, a 2, a 4 ... or a 1, a 3, a 5 ...). Then for a stable system it is necessary and sufficient that the determinant and all principal diagonal minors of the matrix be greater than zero.

If at least one determinant is equal to zero, then the system will be on the stability boundary.

If at least one determinant is negative, then the system is unstable regardless of the number of positive or zero determinants.

Example. Given the transfer function of an open system

.

It is required to determine the stability of a closed system by the Hurwitz criterion.

To do this, HPLC is defined:

D(s) = A(s) + B(s) = 2s 4 + 3s 3 + s 2 + 2s 3 + 9s 2 + 6s + 1 = 2s 4 + 5s 3 + 10s 2 + 6s + 1.

Since the degree of HPCD is equal to n = 4, then the matrix will have a size of 4x4. The HPLC coefficients are а 4 = 2, а 3 = 5, а 2 = 10, а 1 = 6, а 0 = 1.

The matrix looks like:

(note the similarity of the matrix rows: 1 with 3 and 2 with 4). Qualifiers:

Δ1 = 5 > 0,

,

Δ 4 \u003d 1 * Δ 3 \u003d 1 * 209\u003e 0.

Since all determinants are positive, then ACP stable. ♦


Mikhailov's criterion.

The stability criteria described above do not work if the transfer function of the system has a delay, that is, it can be written as

,

where t is the delay.

In this case, the characteristic expression of the closed system is not a polynomial and its roots cannot be determined. To determine the stability in this case, the frequency criteria of Mikhailov and Nyquist are used.

The procedure for applying the Mikhailov criterion:

1) The characteristic expression of a closed system is written:

D s (s) \u003d A (s) + B (s) . e - t s .

Size: px

Start impression from page:

transcript

1 THEORY OF AUTOMATIC CONTROL FOR "DUTTIES" KYU Polyakov St. Petersburg 8

2 KYU Polyakov, 8 "In a university you need to present the material at a high professional level But since this level is much higher than the head of an average student, I will explain with my fingers It's not very professional, but it's understandable" Unknown teacher Foreword This manual is intended for the first acquaintance with the subject Its task is to explain “on the fingers” the basic concepts of the theory of automatic control and to make sure that after reading it you can perceive professional literature on this topic. You need to consider this manual only as a foundation, a launching pad for a serious study of a serious subject that can become very interesting and fascinating There are hundreds of textbooks on automatic control. But the whole problem is that the brain, when perceiving new information, is looking for something familiar, on which it can “hook”, and on this basis “attach” the new to already known concepts. Practice shows that reading serious textbooks are difficult for a modern student There is nothing to cling to Yes, and rigorous scientific evidence often eludes the essence of the matter, which is usually quite simple The author tried to "go down" to a lower level and build a chain from "everyday" concepts to the concepts of control theory formulas are used only where it is impossible to do without them. A mathematician will find many inconsistencies and omissions here, since (in accordance with the goals of the manual) the choice between rigor and intelligibility is always made in favor of intelligibility Little preliminary knowledge is required from the reader It is necessary to have an idea about some sections of the course of higher mathematics :) derivatives and integrals;) differential equations; 3) linear algebra, matrices; 4) complex numbers Acknowledgments The author expresses his deep gratitude to Ph.D. AN Churilov, Ph.D. VN Kalinichenko and Ph.D.

3 Kyu Polyakov, 8 Contents BASIC CONCEPTS 4 Introduction 4 Control systems4 3 What are control systems? 7 MATHEMATICAL MODELS What do you need to know to manage? Connection of input and output 3 How are models built? 4 Linearity and Nonlinearity 5 Linearization of Equations3 6 Control 7 3 LINEAR OBJECT MODELS 3 Differential Equations 3 State Space Models 33 Transition Function 34 Impulse Response (Weight Function) 4 35 Transfer Function 5 36 Laplace Transform 6 37 Transfer Function and State Space 9 38 Frequency Response 3 39 Logarithmic frequency characteristics3 4 typical dynamic links34 4 amplifier 34 4 aperiodic link34 43 oscillatory link36 44 integrating link 6 CONTROL SYSTEM ANALYSIS 47 6 Control requirements 47 6 Output process 47 63 Accuracy Stability 5 65 Stability criteria 57 66 Transient process 6 67 Frequency quality estimates 63 68 Root quality estimates 65 69 Robustness 66 7 SYNTHESIS OF REGULATORS 69 7 Classes Diagram 69 7 PID Controllers 7 73 Pole Placement Method 7 74 LPFC Correction 7 75 Combined Control 75 76 Invariance 75 77 Many Stabilizing Controllers 76 CONCLUSION 79 LITERATURE FOR FURTHER READING 8 3

4 Kyu Polyakov, 8 Basic concepts Introduction Since ancient times, man wanted to use objects and forces of nature for his own purposes, that is, to control them. You can control inanimate objects (for example, rolling a stone to another place), animals (training), people (boss subordinate) Many control tasks in the modern world are associated with technical systems of cars, ships, aircraft, machine tools. For example, it is necessary to maintain a given course of the ship, the height of the aircraft, the engine speed, the temperature in the refrigerator or in the oven. If these tasks are solved without human intervention, they talk about automatic control. Management theory tries to answer the question "how should one manage?" Until the 19th century, the science of control did not exist, although the first automatic control systems already existed (for example, windmills “taught” to turn towards the wind) The development of control theory began during the industrial revolution. At first, this direction in science was developed by mechanics to solve control problems, that is, maintaining the set value of rotational speed, temperature, pressure in technical devices (for example, in steam engines) Hence the name “automatic control theory” The processes of control and processing of information in systems of any nature are studied by the science of cybernetics. One of its sections, mainly related to technical systems, is called the theory of automatic control. In addition to the classical problems of regulation, it also deals with the optimization of control laws, The term “automatic control theory” and “automatic control theory” are sometimes used as synonyms. For example, in modern foreign literature you will find only one term control theory Control systems What does a control system consist of? In control tasks, there are always two objects, a controlled and a controlling object. The controlled object is usually called the control object or simply the object, and the control object is the regulator. For example, when controlling the rotational speed, the control object is the engine (electric motor, turbine); in the problem of stabilizing the course of a ship, a ship submerged in water; in the task of maintaining the volume level of the speaker Regulators can be built on different principles The most famous of the first mechanical regulators is the Watt centrifugal regulator to stabilize the speed of the steam turbine (in the figure on the right) When the speed increases, the balls diverge due to the increase in centrifugal force At the same time through the system levers, the damper closes slightly, reducing the flow of steam to the turbine. The temperature controller in a refrigerator or thermostat is an electronic circuit that turns on the cooling (or heating) mode if the temperature becomes higher (or lower) than the set one. In many modern systems, regulators are microprocessor devices, computers. They successfully control planes and spaceships without the participation of human steam to the turbine 4

5 Kyu Polyakov, 8 ka A modern car is literally “stuffed” with control electronics, up to on-board computers. Usually, the regulator does not act on the control object directly, but through actuators (actuators) that can amplify and convert the control signal, for example, an electric signal can “ turn into a movement of the valve that regulates the fuel consumption, or into turning the steering wheel at a certain angle In order for the regulator to “see” what is actually happening with the object, sensors are needed. Sensors most often measure those characteristics of the object that need to be controlled. In addition, the quality of control can be improved if additional information is obtained measure the internal properties of the object System structure So, a typical control system includes an object, a regulator, a drive and sensors However, a set of these elements is not yet a system To turn into a system, communication channels are needed, through which information is exchanged between elements Information can be transmitted using electric current, air (pneumatic systems), liquid (hydraulic systems), computer networks Interconnected elements are already a system that has (due to connections) special properties that individual elements and any combination of them do not have. that the object is affected by the environment external perturbations that “prevent” the controller from performing the task set. Most of the perturbations are unpredictable in advance, that is, they are random in nature. In addition, the sensors measure the parameters not accurately, but with some error, albeit a small one. In this case, they talk about “ measurement noise" by analogy with noise in radio engineering that distort signals. Summing up, we can draw a block diagram of the control system as follows: setting regulator control drive object external disturbances feedback sensors measurement noise For example, in the ship's heading control system, the control object is the ship itself, in water; to control its course, a rudder is used that changes the direction of the flow of water; digital computer controller; drive steering device, which amplifies the control electrical signal and converts it into a steering wheel; sensors measuring system that determines the actual course; external disturbances are sea waves and wind, which deviate the ship from a given course; measurement noises are sensor errors Information in the control system “goes in circles”: the controller issues a control signal to the drive, which acts directly on the object; then information about the object through the sensors returns back to the controller and everything starts anew They say that the system has feedback, that is, the controller uses information about the state of the object to develop control Systems with feedback are called closed, since information is transmitted along a closed loop 5

6 Kyu Polyakov, 8 3 How does the regulator work? The controller compares the setting signal (“setpoint”, “setpoint”, “desired value”) with the feedback signals from the sensors and determines the mismatch (control error) the difference between the setpoint and the actual state If it is zero, no control is required If there is a difference, the controller produces a control signal that tends to reduce the mismatch to zero Therefore, the controller circuit in many cases can be drawn as follows: setting mismatch (error) control algorithm feedback control Such a circuit shows control by error (or deviation) This means that in order to the controller starts to act, it is necessary that the controlled value deviate from the set value The block marked with the sign finds a mismatch In the simplest case, the feedback signal (measured value) is subtracted from the set value Can the object be controlled so that there is no error? In real systems, none First of all, due to external influences and noises that are not known in advance. In addition, control objects have inertia, that is, they cannot instantly switch from one state to another. The capabilities of the controller and drives (that is, the power of the control signal) are always are limited, so the speed of the control system (the speed of transition to a new mode) is also limited. and the actual state of the control object. Such feedback is called negative, because the feedback signal is subtracted from the master signal. Can it be the other way around? It turns out that yes. In this case, the feedback is called positive, it increases the mismatch, that is, it tends to “shake” the system In practice, positive feedback is used, for example, in generators to maintain undamped electrical oscillations 4 Open-loop systems Is it possible to control without using feedback ? In principle, it is possible. In this case, the controller does not receive any information about the real state of the object, therefore, it must be known exactly how this object behaves. Only then can it be calculated in advance how it needs to be controlled (build the desired control program). that the task will be completed Such systems are called program control systems or open-loop systems, since information is not transmitted along a closed loop, but only in one direction. Program regulator control drive object external disturbances A blind and deaf driver can also drive a car correctly calculate his place Until pedestrians or other vehicles meet on the way, which he cannot know in advance. From this simple example, it is clear that without 6

7 Kyu Polyakov, 8 feedback (information from sensors) it is impossible to take into account the influence of unknown factors, the incompleteness of our knowledge Despite these shortcomings, open-loop systems are used in practice rotational speed However, from the point of view of control theory, open-loop systems are of little interest, and we will no longer remember about them. 3 What are control systems? An automatic system is a system that works without human intervention. There are also automated systems in which routine processes (collection and analysis of information) are performed by a computer, but the entire system is controlled by a human operator, who makes decisions. We will further study only automatic systems 3 Tasks of control systems Automatic control systems are used to solve three types of problems: stabilization, that is, maintaining a given operating mode that does not change for a long time (the setting signal is constant, often zero); program control control according to a previously known program (the setting signal changes, but is known in advance); tracking an unknown master signal Stabilization systems include, for example, autopilots on ships (maintaining a given course), turbine speed control systems. Program control systems are widely used in household appliances, for example, in washing machines. they are used in drives and when transmitting commands via communication lines, for example, via the Internet 3 One-dimensional and multidimensional systems By the number of inputs and outputs, there are one-dimensional systems that have one input and one output (they are considered in the so-called classical control theory); multidimensional systems that have several inputs and/or outputs (the main subject of study of modern control theory) We will study only one-dimensional systems, where both the object and the controller have one input and one output signal. one control action (rudder turn) and one adjustable value (heading) However, in fact, this is not entirely true. The fact is that when the course changes, the roll and trim of the ship also change. In a one-dimensional model, we neglect these changes, although they can be very significant For example, during a sharp turn, the roll can reach an unacceptable value. On the other hand, not only the steering wheel can be used for control, but also various thrusters, stabilizers, etc., that is, the object has several inputs. Thus, the real course control system is multidimensional systems is quite a difficult task and is beyond the scope of this tutorial. Therefore, in and in engineering calculations, sometimes they try to simplify a multidimensional system as several one-dimensional ones, and quite often this method leads to success. discrete, in which discrete signals (sequences of numbers) are used, determined only at certain points in time; 7

8 Kyu Polyakov, 8 continuous-discrete, in which there are both continuous and discrete signals Continuous (or analog) systems are usually described by differential equations These are all motion control systems in which there are no computers and other elements of discrete action (microprocessors, logical integrated circuits ) Microprocessors and computers are discrete systems, since all information is stored and processed in them in a discrete form. A computer cannot process continuous signals, since it only works with sequences of numbers. Examples of discrete systems can be found in economics (reference period quarter or year) and in biology ( model "predator-prey") To describe them, difference equations are used. There are also hybrid continuous-discrete systems, for example, computer systems for controlling moving objects (ships, planes, cars, etc.) In them, some of the elements are described by differential equations, and some by difference equations. dot and from the point of view of mathematics, this creates great difficulties for their study, therefore, in many cases, continuous-discrete systems are reduced to simplified purely continuous or purely discrete models. which all parameters remain constant are called stationary, which means "not changing in time" This tutorial deals only with stationary systems In practical problems, things are often not so rosy For example, a flying rocket consumes fuel and due to this its mass changes Thus, a rocket non-stationary object Systems in which the parameters of an object or controller change over time are called non-stationary. Although the theory of non-stationary systems exists (the formulas are written), it is not so easy to apply it in practice in the same way as external influences. In this case, we are talking about deterministic systems that were considered in the classical control theory. However, in real problems we do not have exact data. First of all, this applies to external influences. For example, to study the roll of a ship. at the first stage, we can assume that the wave has the shape of a sine of known amplitude and frequency. This is a deterministic model. Is this true in practice? Naturally, no. Using this approach, only approximate, rough results can be obtained. According to modern concepts, the waveform is approximately described as the sum of sinusoids that have random, that is, unknown in advance, frequencies, amplitudes, and phases. Interference, measurement noise, these are also random signals. Systems in which there are random perturbations or the parameters of the object can change randomly, are called stochastic (probabilistic) The theory of stochastic systems allows you to get only probabilistic results For example, it cannot be guaranteed that the ship’s deviation from the course will always be no more, but you can try to ensure such a deviation with a certain probability ( a probability of 99% means that the requirement will be met in 99 cases out of) It must be remembered that the expression “optimal system” does not mean that it is really ideal. Everything is determined by the accepted criterion; if it is chosen successfully, the system will turn out to be good, if not, then vice versa 8

9 Kyu Polyakov, 8 37 Special classes of systems If the parameters of an object or disturbances are known inaccurately or can change over time (in non-stationary systems), adaptive or self-adjusting controllers are used, in which the control law changes when conditions change. In the simplest case (when there are several previously known modes of operation) there is a simple switching between several control laws Often in adaptive systems, the controller evaluates the parameters of the object in real time and accordingly changes the control law according to a given rule , is called extreme (from the word extremum, meaning maximum or minimum) Many modern household devices (for example, washing machines) use fuzzy controllers built on the principles of fuzzy logic. solutions: “if the ship has gone too far to the right, the rudder should be shifted far to the left”

10 Kyu Polyakov, 8 Mathematical models What do you need to know for management? The goal of any control is to change the state of the object in the right way (in accordance with the task) The theory of automatic control should answer the question: “how to build a controller that can control this object in such a way as to achieve the goal?” To do this, the developer needs to know how the control system will respond to different influences, that is, a model of the system is needed: an object, drive, sensors, communication channels, disturbances, noise Model is an object that we use to study another object (original) Model and original should be somewhat similar so that the conclusions made when studying the model could (with some probability) be transferred to the original. We will be primarily interested in mathematical models expressed in the form of formulas. In addition, descriptive (verbal) ), graphic, tabular and other models Connection of input and output Any object interacts with the external environment using inputs and outputs Inputs are possible effects on the object, outputs are those signals that can be measured For example, for an electric motor, inputs can be supply voltage and load, and the outputs are the shaft rotation speed, temperature The inputs are independent, they “come” from the external environment When changing and information at the input, the internal state of the object changes (as its changing properties are called) and, as a result, the outputs: input x output y This means that there is some rule according to which the element transforms the input x into the output y This rule is called the Record operator y U[ x] means that the output y is obtained by applying the operator U to the input x To build a model, this means to find an operator that connects the inputs and outputs It can be used to predict the reaction of an object to any input signal Consider a DC motor The input of this object is the supply voltage (in volts), output speed (in revolutions per second) We will assume that at a voltage of V, the speed is equal to rpm, and at a voltage of V rpm, that is, the speed is equal in magnitude to the voltage It is easy to see that the action of such an operator can be write as U [ x] x Now suppose that the same engine rotates the wheel and as the output of the object we have chosen the number of revolutions of the wheel relative to about the initial position (at the moment t) In this case, with uniform rotation, the product x t gives us the number of revolutions during the time t, that is, y (x t (here the notation y (explicitly denotes the dependence of the output on time t) defined operator U? Obviously not, because the obtained dependence is valid only for a constant input signal If the voltage at the input x (changes (does not matter how!), The angle of rotation will be written as an integral U Of course, this will only be true in a certain range of voltages

11 Kyu Polyakov, 8 t U[ x] x(dt The operator that acts according to this rule is called the integration operator. Using this operator, you can, for example, describe the filling of an empty tank with water If the tank cross section S (in m) is constant over its entire height , then the water level h is defined as the integral of the water flow q (in m 3 /s) divided by S: h(q(dt, S The inverse operator of the differentiation operator calculates the derivative: dx(U[ x(] x& (dt As we we will see that this operator plays a very important role in the description of control objects. Usually, the differentiation operator is denoted by the letter p Record y (p x(outwardly looks like a “multiplication” of the operator p by the signal x (, but actually denotes the action of this operator, that is, differentiation: dx (p x(() dt Where do such operators occur? Here are examples from electrical engineering. For example, it is known that the current i (in amperes) passing through a circuit with a capacitor is proportional to the derivative of the potential difference u (in volts) on its plates: i du( i (C C p u(dt u Here C is the capacitance of the capacitor (measured in farads) In addition, the voltage drop u across the inductor is proportional to the derivative of the passing current i: i di(u (L L p i(dt u where L is the inductance (measured in henry) The differentiation operator is ideal (physically unrealizable) operator, it is impossible to implement in practice To understand this, let's remember that with an instantaneous change in the signal, its derivative (rate of increase) will be equal to infinity, and no real device can work with infinite signals 3 How are models built? Firstly, mathematical models can be obtained theoretically from the laws of physics (the laws of conservation of mass, energy, momentum) These models describe the internal connections in the object and, as a rule, are the most accurate. Consider an RLC circuit, that is, a series connection of a resistor with resistance R ( in ohms), an inductor with an inductance L and a capacitor with a capacitance C It can be described using two equations: u (R i (L C u c (t di(u(uc(L R i(dt duc(i(C dt First equation means that the potential difference at the ends of the RLC chain is equal to the sum of the potential differences at all intermediate sections

12 Kyu Polyakov, 8 store is calculated according to Ohm's law, and on the coil according to the formula given in the previous paragraph The second equation describes the relationship between voltage and current for a capacitor The input of this object is the voltage u (at the ends of the chain, and the output potential difference u c The second way to build a model as a result of observing an object with various input signals (this is what the theory of identification deals with) The object is considered as a "black box", that is, its internal structure is unknown. We look at how it reacts to input signals, and try to adjust the model in such a way that so that the outputs of the model and the object coincide as accurately as possible with various inputs. In practice, a mixed method is often used: the structure of the model (the type of equation connecting the input and output) is determined from theory, and the coefficients are found empirically. For example, the general form of the ship’s motion equations is well known, however in these equations there are coefficients that depend on many facts ditch (hull shape, surface roughness, etc.), so that it is extremely difficult (or impossible) to find them theoretically. In this case, to determine unknown coefficients, scale models are built and tested in pools using special methods. In the aircraft industry, wind tunnels are used for the same purposes. of any control object, it is possible to build many different models that will take into account (or not take into account) certain factors. Usually, at the first stage, they try to describe the object in as much detail as possible, draw up a detailed model. However, it will be difficult to theoretically calculate the control law that meets the specified requirements to the system Even if we can calculate it, it may turn out to be too complicated to implement or very expensive help can often achieve the desired However, in this case, there is no guarantee that it will control the complete model (and the real object) just as well. A compromise is usually used. Start with simple models, trying to design the controller so that it “fits” for a complex model. This property is called robustness. (roughness) of the controller (or system), it means insensitivity to modeling errors. Then, the operation of the constructed control law is checked on the full model or on a real object. If a negative result is obtained (a simple controller “does not work”), complicate the model by introducing additional details into it AND everything starts all over again 4 Linearity and non-linearity It is known from school mathematics that it is easiest to solve linear equations It is much more difficult to work with non-linear equations (quadratic, cubic, etc.), many types of equations mathematician is not yet able to solve analytically (exactly) Among operators, the simplest are also linear They have two properties: ie to a constant: U[ α x] α U[ x], where α is any constant (that is, if the input increases several times, the output increases by the same amount); superposition principle: if the sum of two signals is applied to the input, the output will be the sum of the reactions of the same operator to separate signals: U [ x x] U[ x ] U[ x] Models that are described by linear operators are called linear. You can work with them with using the methods of the theory of linear systems, which is the most developed and allows you to accurately solve most of the known practical problems. In mathematics, these properties are called homogeneity and additivity

13 Kyu Polyakov, 8 However, all models of real systems are nonlinear. This is easy to understand, if only because there is always a maximum allowable value of the input signal; if it is exceeded, the object can simply fail or even collapse (linearity is violated) Methods for studying nonlinear operators are very complex mathematically , in the theory of nonlinear systems, exact solutions are known only for a fairly narrow range of problems. There are still more “blank spots” than the results obtained, although this scientific direction has been actively developing in recent years. What to do? Most often, the linearization of the nonlinear model of the object (drive) is first carried out, that is, an approximate linear model is built. Then, based on this model, the control law is designed using the exact methods of the theory of linear systems. Finally, the obtained controller is checked using computer simulation on a complete nonlinear model. It should be noted that if the object or drive has a so-called "significant" non-linearity, this approach may not work. Then one has to use the methods of non-linear theory, as well as computer simulations. systems with various admissible input signals Thus, one more division should be added to the classification of control systems in section 3, perhaps the most significant systems are linear and nonlinear In linear systems, all links are described by linear opera 5 Linearization of equations You already know that in control theory the methods of studying linear systems are best developed. However, there are no strictly linear systems in the world around us. Therefore, in order for these methods to be applied in practice, it is necessary to perform linearization to build an approximate linear model based on a more realistic non-linear model of the object relates the water level in the tank h (in meters) and the flow rate of the outflowing water q (in m 3 / s) This relationship can be found using Bernoulli's law, which in this case takes the form ρ v ρ g h Here ρ is the density of the liquid (in kg / m 3), g 9.8 m/s free fall acceleration, v fluid outflow velocity (in m/s) is calculated as q S v, we find q α h, () where α S g is a constant value This is a static model, because it does not contain derivatives that characterize the change in signals over time the water level and the outflowing water flow are also constant h S S q 3

14 Kyu Polyakov, 8 Obviously, the model () is non-linear, since it contains h To linearize it means to approximately replace the equation () with a linear equation q k h, where k is some coefficient How to choose it? There is no unequivocal answer to this question. Suppose that the water level changes in the range from to m Then one of the options is to calculate the coefficient as the angle of inclination of the segment connecting the points of the curve q α h at the ends of this interval. For definiteness, we further take α everywhere, then we get k Of course, this model is very rough and gives a large error, especially for levels in the range from, to,6 To reduce the error, you can try to slightly change k (for example, increasing it to slightly better than in the first case q q k, k.77 k h 5 h tangent at the point (,5;), the slope of which is equal to the derivative dq k,77 dh h,5 h h,5 The tangent is a straight line with slope k passing through the point (,5;), its equation is q kh b b define m from the equation kh b,5 b b,354, 4 so we get the model q h (3) 4 This is a linear equation, but the model (3) is non-linear, since it does not, for example, have the property of multiplication by a constant This can be easily verified by comparing U [ h] and U[ h] : U [ h] h, U[ h] h U[ h] (h; q), in which we determined the slope of the tangent. It follows from (3) that 4

15 KYu Polyakov, 8 q q (h h) (4) 4 Since the graph of dependence (3) passes through the point (h ; q), we can apply the equality q h Then from (4) we find 4 q h (5) The equation thus obtained is a linear model object, written in deviations of the input and output from the nominal (working) point (h ; q) The approximate model (5) most closely matches the object near this point, and with large deviations from it, the error can increase significantly principles of linearization of nonlinear algebraic equations In the next paragraph, the same ideas are used for a more complex model that describes the dynamics of the system (change in time) dynamic models that are described by differential equations containing derivatives (rates of change of signals) As we have seen in section 3, such models can be derived from physical laws. In many cases, more or less accurate models are non-linear differential equations, so linearization is required in order to apply the theory of linear systems. This applies almost the same technique as for algebraic equations The idea of ​​linearization lies in the fact that in control systems (maintaining specified values ​​of quantities), the signals deviate little from the operating point of a certain equilibrium position, in which all signals have “correct” values ​​and their derivatives are equal to zero. Therefore, to solve control problems, it is often sufficient to use a linear model in deviations from this operating point The model just built for a water tank is not quite correct because it does not take into account that the level in the tank changes and decreases as water flows out. In addition, suppose that a pump is used to maintain the level, which pumps water into the tank, its consumption is denoted by cut Q For such an object, the input is the flow rate Q, and the output is a change in the level h. Suppose that over a small interval t, the flow rates Q and q can be considered constant. During this time, the volume of water added to the tank by the pump is Q t, and the volume of "gone" water q t Considering that the area of ​​the tank is equal to S, we get the level change: (Q q) h t Going to the limit at t, we get the differential equation S dh([ Q(q(] dt S This model takes into account that the water level and flow change in time Recall that the flow rate of the outflowing liquid q (depends on the water level in the tank h (and is associated with it by a nonlinear dependence q(α h(Therefore, the equation can be written as dh(α Q(h((6) dt S S 5

16 Kyu Polyakov, 8 Here only two changing quantities remain: the pump flow rate Q ((inlet of the object) and the water level h ((outlet) Further, to simplify the notation, we will not explicitly indicate the dependence of these signals on time In the steady (static) mode, when signals do not change, all derivatives are equal to zero In our case, assuming in (6), we obtain dh(dt Q Q α h h (7) α at the input to get the value of the output h Now suppose that a certain operating point is given, that is, the values ​​of the input Q Q and output h h satisfy equation (7), and the system works all the time around this equilibrium position Near this point Q Q Q and h h h, where Q and h small deviations of the input and output from the operating point Further, for linearization, the expansion of functions in a Taylor series is used. For some function f (x, y) in the vicinity of the point x,), this series has the form: (y f (x, y) f (x, y) f (x, y) f (x, y) x y F(x, y), x y f (x, y) f (x, y) where and x y are the partial derivatives of the function f (x, y) in x and y at the point (x, y), and F (x, y) depends on the higher derivatives at the same point (second, third, etc.) For small values ​​of x and y, we can assume that that the "tail" of this series F (x, y) is very small, approximately equal to zero, so f (x, y) f (x, y) f (x, y) f (x, y) x y (8) x y formula (8) for the linearization of the right side of equation (6), where the role of x is the flow rate Q, and the role of y is the level h Performing differentiation, we find α α α Q h, Q h Q S S S h S S S h Then using formula (8) we obtain α α α Q h Q h Q h S S S S S S h Substitute Q Q Q and h h h into equation (6) and take into account that d h α α Q h Q dt S S S S h h d(h h) d h Then dt dt α Recalling that Q and h correspond to the static mode, that is, Q h, we obtain a linearized equation in deviations from the operating point: S S d h kh h kq Q, (9) dt 6

17 Kyu Polyakov, 8 α where k h and k Q Note that the coefficient k h depends on h, that is, on the choice of the operating point. This manifests the non-linearity of the object S h S Usually, when writing a linearized equation, the sign (denoting the deviation) is not written we obtain a linearized model dh(kh h(kq Q(() dt) Let's look at an example of how an object can be controlled and what comes of it Let's slightly change the previous problem by allowing the flow of the outflowing liquid q to change independently (in control theory this is called the load on the object) In order to provide water to all the villagers, we built a water tower, which water is pumped from the river by a pump. Each resident can turn on the water on his plot at any time, for example, for irrigation. automatically maintains a given level h of water in the tank Q (in meters) We will assume that there are quite a lot of inhabitants, so someone always has water turned on and the pump constantly works to pump water into the tank To control the water level h, we can change its flow Q ( in m 3 /s) Thus, the level h is a controlled value, and the flow Q is a control signal c) shows how much water flows out of the tank in s this load The change in level h depends on the flow difference Q q and the cross-sectional area of ​​the tank S If the flow difference is constant over a time interval, in general, you need to use the integral: t t, then t В S h((Q(q() dt S) nominal (working point) For In order to obtain the equation in deviations, we represent the flows in the form Q(q Q(, q(q q(, where Q( and q(small deviations of the flows from the nominal mode Then, omitting the increment sign, we can write the model of the control object in the form t h( (Q(q() dt S 7

18 Kyu Polyakov, 8 Here h (, Q (and q (denote the deviations of these quantities from the nominal values). Note that this model can be written as a differential equation (if we find the derivatives of both parts of the equality): dh([ Q(q(] dt S For simplicity, we will further take S m As feedback, we will use the signal from the level sensor. The control error is calculated as the difference between the set and measured water levels: -regulator), which controls the flow according to the law q(K e(K [ h (h(]) The block diagram of the control system is shown in the figure below. the sector is painted black, the signal entering it is subtracted (taken into account in the sum with a minus sign). In addition to the signals that have already been discussed, the figure also shows the measurement noise m (, distorting sensor readings q controller object h e Q h K m Let's check the operation of this controller at different values ​​of the coefficient K First, we will assume that there is no measurement noise, that is, the level is measured accurately Suppose that the water flow at the outlet q increases abruptly (everyone started watering the gardens) Blue the line in the figure (see below) shows the change in level at K, and the green line at K 5 h K 5 t K Based on these data, we can draw some conclusions: when the load changes (water consumption, flow q), the regulator-amplifier cannot maintain a given level ( graphs do not come to the value of h); the larger K, the smaller the control error h in steady state; it can be expected that for K the error should decrease to zero; the more K, the faster the transition to the new mode ends It seems that to improve control it is necessary to increase K, but this is only a first impression Now let's see what happens if there is measurement noise (random sensor error) 8

19 KYu Polyakov, 8 h Q K 5 t K 5 K K t The graphs show that with inaccurate measurements, the level fluctuates around a certain average value (that which was obtained without noise), and at higher K, the fluctuations increase. This effect is especially clearly visible in the graph changes in pump flow rate q (figure on the right) With increasing K, the increase in accuracy (reduction of the steady-state error) is achieved due to the increased activity of the pump, which “twitches” all the time. At the same time, the mechanical parts wear out and its service life is significantly reduced. Therefore, the coefficient K cannot be greatly increased One From the main conclusions of this example: control is most often associated with a trade-off Here, on the one hand, you need to increase K to increase accuracy, and on the other hand, you need to decrease K to reduce the influence of measurement noise (P-regulator) The thoughtful reader should inevitably have questions of the following nature : can any object be controlled by a regulator-amplifier? how to choose the right coefficient K (what value to stop at)? Is it possible to achieve better control with a more complex controller? what regulator should be applied to improve control? how to ensure a zero steady-state error (constant level at any flow rate q) and is it possible to do this at all? how to suppress measurement noises so that they do not lead to pump “twitching”? The following sections present the basics of automatic control theory, which answers such questions and offers reliable methods for designing controllers that solve the control problem in accordance with given requirements 9

20 Kyu Polyakov, 8 3 Models of linear objects 3 Differential equations When composing a model of an object based on physical laws, we most often get a system of differential R equations of the first and second order For an example, we will show how to build a model of a DC motor using the laws of mechanics and electrical engineering object armature voltage u ((in ω e u volts), output angle of rotation of the shaft θ ((in radians) i First, let's recall some "worldly" knowledge about electric motors The motor shaft starts to rotate when the supply voltage is applied If the voltage does not change, the angular speed of rotation ω ((in radians per second) remains constant, while the angle θ (increases uniformly. The greater the voltage, the faster the shaft rotates. If you clamp the shaft by hand (or connect a load, for example, make the engine rotate the turbine), the rotation speed gradually decreases to a new value at which the motor torque will be equal to the moment of resistance (load ) As long as these moments are equal, the rotation speed remains constant and its derivative is equal to zero Now we translate these arguments into the strict language of mathematics θ (this is the integral of the angular velocity. In mechanics, the equation of rotational motion is usually written as dω(J M (M H (, dt where M (torque moment (measured in H m), M H (load torque (disturbance, also in H m) Letter J the total moment of inertia of the armature and the load is indicated (in kg m) The moment of inertia indicates how easy it is to “accelerate” the engine (the greater the moment of inertia, the more difficult it is to “accelerate”) Let's move on to electrical engineering In our case, the moment M (this is the electromagnetic moment of the engine , which is calculated by the formula M (CM Φ i(, where C M is the coefficient, Φ is the magnetic flux created by the excitation winding (measured in webers); i (armature current (in amperes), which can be found from the equation u(e(R i(, where e (electromotive force (EMF) of the armature (in volts) and R resistance of the armature circuit (in ohms) In turn, EMF is calculated in terms of magnetic flux and rotation frequency: e(Cω Φ ω(, where C ω is the coefficient Introducing new constants k C M Φ and k C ω Φ, we can write the motor model as a system of equations dω(dθ (J k i(M H (, e (k ω, ω (, u(e(R i(() dt dt Model () describes the connections of real signals in the system, its internal structure we are not very interested in the internal structure, that is, we

21 Kyu Polyakov, 8 we look at the object as a “black box” Substituting the second equation from the system () into the third one, we find i (and substitute it into the first equation Passing to the variable θ (, we get: d θ (k dθ (J u(k M (H dt R dt or, moving all terms depending on θ (, to the left side of the equation d θ (kk dθ (J k u(M (H () dt R dt) This is a second-order differential equation relating the input u (and the load M H (with output θ (Compared to the system (), all internal signals of the original model (e (and i ()) were excluded from the equations. Therefore, equation () is called the “input-output” equation. The order of the model is the order of the corresponding differential equation. In this case, we got the model second order In this section, using a simple example, we looked at how mathematical models of control objects are built on the basis of physical laws. As a rule, they are differential equations. In the future, we will use ready-made models of control objects, assuming that they were and obtained by someone earlier (for example, provided by the customer) 3 Models in the state space a system of differential equations of the first order, which is called the Cauchy normal form Consider again the model of the electric motor, assuming that M H ((no load) (ω(u(J R J R This system of first-order differential equations can be written in matrix form: & θ ((kk θ k u((J R ((3) & ω ω J R) means that knowing their values ​​at some point in time t and the input signal u (for all t< t) не играют никакой роли Поэтому θ (и ω (называются переменными состояния, а вектор θ (вектором состояния ω(В теории управления принято обозначать вектор состояния через x (, вход объекта (сигнал управления) через u (Тогда модель (3) может быть записана в виде x& (A x(B u((4) θ (где x(, A kk ω(и B k Модель (4) связывает вход u (и вектор состояния x (, поэтому она называется моделью J R J R вход-состояние

22 KYu Polyakov, 8 The complete state-space object model contains another equation, the output equation, which shows how the output of the object y (: x& (A x(B u((5) y(C x(D u(This model is called input-state-output model The output coordinate for a DC motor is the angle of rotation of the shaft: , by changing the matrices C and D, any linear combination of state and input variables can be taken as the output. In many practical problems, the output is one or more state variables that we can measure. time, matrices A, B, C and D in the model (5) are constant. Such objects are called stationary, in contrast to non-stationary objects, the parameters of which change in time. Writing models in a single form (5) allows you to abstract from the meaning of state variables and explore systems p different nature by standard methods that are well developed and implemented in modern computer programs Let's show how equations of the form (5) can be solved and why this form of writing is convenient Let's assume that we know the initial conditions, that is, the state vector x () at t Recall that knowledge of x () and input u (for all t > makes it possible to uniquely determine the further behavior of this object The first equation in (5) allows you to find the derivative, that is, the rate of change of the state vector x (at any time We assume that when t t, where t is a small time interval, this derivative does not change Then the value of the state vector at t t is approximately determined by the formula x(x() x& () t x() [ A x() B u() ] t, that is, it can be easily calculate Knowing x(and control signal u(, we find the system output at the same moment y(C x(D u(This technique can be applied further, at the end of the second interval we get x(x(x& (t x([ A x(B u(] t, y(C x(D u(Thus, one can (approximately) calculate the output of the system for all t > Of course, the accuracy will be the higher, the smaller t, however, the amount of calculations will also increase. This method for the approximate solution of differential equations is called the Euler method Since we did not make any assumptions about the constant matrices D, it (as well as other, more advanced methods) can be used without changes to solve any equations of the form (5) “single jump” (“single step signal”), that is, an instantaneous change in the input signal from to at time t Formally, this signal is defined as follows:, t< (, t


Lecture 3 Mathematical description of control systems In control theory, in the analysis and synthesis of control systems, they deal with their mathematical model. The mathematical model of the ACS is an equation

Lecture 14 Classification of ACS Non-adapting systems are the simplest systems that do not change their structure and parameters in the process of control. They are used for stationary control objects, for

FGBOU VPO "Omsk State Technical University" SECTION II CONTINUOUS LINEAR AUTOMATIC CONTROL SYSTEMS Lecture 5

A task. "Nonlinear trio" In this problem, "nonlinear resistors" are considered. The voltage U on such a resistor is proportional to the square of the current flowing through the resistor (observing the polarity):

4 Lecture 5 ANALYSIS OF DYNAMIC CIRCUITS Plan Equations of state of electrical circuits Algorithm for the formation of equations of state 3 Examples of compiling equations of state 4 Conclusions Equations of state of electrical

Lecture 5 Automatic regulators in control systems and their settings Automatic regulators with typical control algorithms relay, proportional (P), proportional-integral (PI),

Forced electrical oscillations. Alternating current Consider the electrical oscillations that occur when there is a generator in the circuit, the electromotive force of which changes periodically.

Topic: Laws of alternating current Electric current is called the ordered movement of charged particles or macroscopic bodies Variable current is called, which changes its value over time

Zaitsev G. F. Theory of automatic control and regulation Second edition, revised and supplemented Approved by the Ministry of Higher and Secondary Specialized Education of the USSR as a textbook

Lecture 6 Transformation of mathematical models of systems. transfer functions. Models in the form of signal graphs To study the properties of complex physical systems and learn how to control them, you must have

Equations of dynamics and statics. Linearization At a certain stage of development and research of an automatic control system, its mathematical description is obtained, a description of the processes occurring in the system

Chapter 1. Basic laws of the electrical circuit 1.1 Parameters of the electrical circuit An electrical circuit is a set of bodies and media that form closed paths for the flow of electric current. Usually physical

Robotics RAR1300 Sergei Pavlov TTÜ Virumaa Kolledž Drive control Control of the movement of a working machine or machine means control of the position, speed and acceleration of a system that

Basic terms and definitions The general theory of control, covering both inanimate and living nature, is the subject of the science of cybernetics. The theory of automatic control (TAU) is part of cybernetics.

Lecture 2 Linear automatic control systems Properties of linear systems Based on the study of many models of systems, we can conclude that systems described by linear differential equations,

Lecture 1 Basic concepts and definitions of the theory of automatic control. Management is a set of actions aimed at achieving the goal. Regulation is a special case of managing technical

4.1 Control questions for self-control 1 SECTION "Linear continuous models and characteristics of control systems" 1 What does control theory study? 2 Define the concepts of control and control object.

Test 1 in the discipline "Management of technical systems" Option 1 1. What is the functional purpose of the sensor in the control system? 1) regulate the parameters of the technological process; 2) suppress noise

Topic 4 .. AC circuits Topic questions .. AC circuit with inductance .. AC circuit with inductance and active resistance. 3. AC circuit with capacitance. 4. AC circuit

Appendix 4 Forced electrical oscillations Alternating current The following theoretical information may be useful in preparing for laboratory work 6, 7, 8 in the laboratory "Electricity and Magnetism"

Sinusoidal current "in the palm of your hand" Most of the electrical energy is generated in the form of EMF, which varies in time according to the law of a harmonic (sinusoidal) function. Harmonic EMF sources are

Lecture 1. Control theory main tasks, principles, classification B.1. Basic concepts and definitions Control system (CS) set of control device (CU) and control object (OC), actions

METHODOLOGICAL INSTRUCTIONS for homework at the TCB course Study of a nonlinear automatic control system DEFINITION OF INITIAL DATA Initial data for homework are given

Transitional processes "in the palm of your hand". You already know the methods for calculating a circuit that is in a steady state, that is, in one where the currents, as well as the voltage drops on individual elements, are unchanged over time.

Classification of ulators according to the realized ulirovanie law 1st type. Proportional or P-switch with one setting. Its transfer function is the same as the transfer function of the proportional

Laboratory work 23 Forced oscillations in an oscillatory circuit The purpose of the work: to experimentally investigate the dependence of the voltage on a capacitor in an electromagnetic series oscillatory circuit

Chapter II Construction of a control system model A real control system consists of a certain number of interconnected instruments and devices, including, of course, the control object, which have different

FGBOU VPO "Omsk State Technical University" SECTION II CONTINUOUS LINEAR AUTOMATIC CONTROL SYSTEMS Lecture 4. DYNAMIC LINKS. GENERAL CONCEPTS, TIME CHARACTERISTICS AND FREQUENCY

54 Lecture 5 FOURIER TRANSFORM AND SPECTRAL METHOD OF ANALYSIS OF ELECTRIC CIRCUITS Plan Spectra of aperiodic functions and Fourier transform Some properties of the Fourier transform 3 Spectral method

I. V. Yakovlev Physics materials MathUs.ru Electromagnetic oscillations Topics of the USE codifier: free electromagnetic oscillations, oscillatory circuit, forced electromagnetic oscillations, resonance,

SEMINAR Basic concepts. Compilation (derivation) of a differential equation. The concept of solving a differential equation. Solution by the method of separable variables. Solving a linear differential equation

Helicopter flight altitude control Let's consider the problem of synthesis of the control system for the motion of the center of mass of the helicopter in height. A helicopter as an object of automatic control is a system with several

Laboratory work 1 1 DYNAMIC CHARACTERISTICS OF TYPICAL LINKS 1. The purpose of the work

Solution of systems of nonlinear equations in partial derivatives with time derivative of the first order with coefficients depending on time EG Yakubovsky e-i [email protected] Annotation In the article [ the decision was received

Electromagnetic oscillations Basic theoretical information Harmonic oscillations in an oscillatory circuit An example of an electrical circuit in which free electrical oscillations can occur is

Laboratory work 8 Forced oscillations in a series oscillatory circuit The purpose of the work: to study the amplitude-frequency and phase-frequency dependences of the voltage across the capacitor in series

Moscow State University M.V. Lomonosov Moscow State University Faculty of Physics Department of General Physics Laboratory p ractice in general physics (electricity and magnetism) .M.Bukhanov,

1 INTRODUCTION Basic concepts and definitions of electrical engineering To calculate the mode in a given electrical circuit means to obtain by calculation the current, voltage, power in its individual sections of interest to us

1 Laboratory work 3 b FORCED OSCILLATIONS IN THE OSCILLATIONAL CIRCUIT

Laboratory work 1 1 TYPICAL UNITS OF ACS 1. The purpose of the work

Autumn semester of the academic year Topic 3 HARMONIC ANALYSIS OF NON-PERIODIC SIGNALS Direct and inverse Fourier transforms Spectral characteristic of the signal Amplitude-frequency and phase-frequency spectra

4. MEMBRANE TRANSITION RESPONSE 4.1 Time characteristics of the dynamic system

Topic 3 HARMONIC ANALYSIS OF NON-PERIODIC SIGNALS Direct and inverse Fourier transforms Spectral response of the signal Amplitude-frequency and phase-frequency spectra Spectral characteristics

Federal Agency for Education TVER STATE TECHNICAL UNIVERSITY Department of Technological Process Automation

Fulfilled by: Accepted by: Umarov D. 1-14 IKSUTP Abdurakhmanova M.I. Stability Analysis of ACS The practical suitability of control systems is determined by their stability and acceptable quality of control. Under

Chapter 2. ELECTROMECHANICAL AND ADJUSTING PROPERTIES OF DC ELECTRIC DRIVES 2.1. Mechanical characteristics of electric motors and operating mechanisms Mechanical characteristics of the electric motor

Lecture 8 33 ONE-DIMENSIONAL STATIONARY SYSTEMS APPLICATION OF THE FOURIER TRANSFORMATION 33 Description of signals and systems Description of signals To describe deterministic signals, the Fourier transform is used: it

Klyuev O.V., Sadovoy A.V. Sokhina Yu.V. Dneprodzerzhinsk State Technical University Ukraine Dneprodzerzhinsk IDENTIFICATION OF COORDINATES AND PARAMETERS OF ASYNCHRONOUS MACHINE UNDER VECTOR CONTROL SOFTWARE

ELECTRICAL ENGINEERING AND POWER ENGINEERING 79 UDC 004.0:6.3.078 DEVELOPMENT OF A SOFTWARE PRODUCT FOR IDENTIFICATION OF PARAMETERS OF THE SUBSTITUTION CIRCUIT OF THE KEY ELEMENT AND SYNTHESIS OF THE CORRECTIVE LINK V. M. LUKASHOV, S. N. KUKHARENKO,

Work 13 Study of dependences T(l) and A(t) of a mathematical pendulum Equipment: tripod, pendulum, ruler, electronic counter-stopwatch Description of the method The graphical method is the simplest

Federal State Autonomous Educational Institution of Higher Professional Education "National Research Technological University "MISiS" Entrance Exam PROGRAM

TAU Practical exercises Assignments for control work and guidelines for its implementation Practical lesson AFCH, LAH, transient and weight characteristics of typical dynamic links Most

Impact Accuracy. Static accuracy with harmonic input. The simplest method to learn accuracy is to use the transfer function by mistake. () (). U; (

4 Lecture 1. BASIC CONCEPTS AND ELEMENTS OF ELECTRIC CIRCUITS Plan 1. Introduction. 2. Electrical quantities and units of their measurement. 3. Bipolar elements of electrical circuits. 4. Managed (dependent)

Task 1 Vasya has two absolutely identical dynamometers with very light springs and massive cases. These dynamometers are not calibrated, but both have scales with a linear reading versus stretch.

Table of contents Preface 9 Introduction 11 SECTION 1 Linear automatic control systems 19 1. Drawing up equations of motion of ACS elements and methods for their solution 19 1.1. Mathematical description of elements

Chapter 2. Methods for calculating transient processes. 2.1. Classical calculation method. Theoretical information. In the first chapter, methods for calculating the circuit in steady state were considered, that is,

UDC: 62-971 SYSTEM OF AUTOMATIC CONTROL OF THE TEMPERATURE OF THE FURNACE FOR BRICKING student gr.139114 Lappo I.A. Scientific adviser Chigarev V.A. Belarusian National Technical University Minsk,

Topic 8 LINEAR DISCRETE SYSTEMS The concept of a discrete system Methods for describing linear discrete systems: difference equation, transfer function, impulse response, frequency transfer function

Work 11 STUDY OF FORCED OSCILLATIONS AND THE PHENOMENA OF RESONANCE IN THE OSCILLATORY CIRCUIT In a circuit containing an inductor and a capacitor, electrical oscillations can occur. The work studies