\IEEEPARstart{T}{his} paper sums up the research done in order to understand the dynamics of electrical systems and their underlying differential equations.
\textbf{Dynamical systems} are mathematical objects used to model physical phenomena whose state (or instantaneous description) changes over time \cite{katok1997introduction}. These models are used in financial and economic forecasting, environmental modeling, medical diagnosis, industrial equipment diagnosis, and a host of other applications.
For the most part, applications fall into three broad categories. In \textit{predictive} (also referred to as generative) applications, the objective is to predict future states of the system from observations of its past and present states. In \textit{diagnostic} applications, the objective is to infer what possible past states of the system might have led to its present state (or to the observations leading up to the present state). Finally, there are applications in which the objective is neither to predict the future nor to explain the past, but rather to provide a theory for the physical phenomena. These three categories correspond roughly to the need to predict, explain, and understand physical phenomena.
A \textbf{differential equation} is any equation which contains derivatives, either ordinary derivatives or partial derivatives. Almost every physical situation that occurs in nature can be \textit{described} with an appropriate differential equation.
The \textbf{order} of a differential equation is the order of the highest derivative contained within it. The \textbf{degree} is the exponent on that highest derivative.
There are multiple ways to solve differential equations. Among the numerical ones, notable are Euler's method and the Runge-Kutta method (RK4). Some others are described briefly in the following sections.
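As a quick illustration of the two numerical methods just mentioned, the following minimal Python sketch (the helper names \texttt{euler\_step}, \texttt{rk4\_step}, and \texttt{integrate} are ours, not from any library) compares their accuracy on the test problem $y'=y$, $y(0)=1$, whose exact solution at $t=1$ is $e$:

```python
import math

def euler_step(f, t, y, h):
    # One explicit Euler step: y_{n+1} = y_n + h * f(t_n, y_n)
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta (RK4) step
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(step, f, t0, y0, t_end, n):
    # Apply `step` n times with a fixed step size and return y(t_end)
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

# Test problem y' = y with y(0) = 1; exact value y(1) = e
f = lambda t, y: y
err_euler = abs(integrate(euler_step, f, 0.0, 1.0, 1.0, 100) - math.e)
err_rk4 = abs(integrate(rk4_step, f, 0.0, 1.0, 1.0, 100) - math.e)
print(err_euler, err_rk4)  # RK4 is dramatically more accurate at the same step size
```

With the same 100 steps, Euler's first-order error is on the order of $10^{-2}$ while RK4's fourth-order error is around $10^{-10}$, which is why RK4 is the usual default.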
Differential equations fall into two groups: \textit{ordinary differential equations} (ODEs) and \textit{partial differential equations} (PDEs). Our study won't go into further detail about PDEs and will stay focused mainly on ODEs.
Understanding \textbf{direction fields} (or \textbf{slope fields}) and what they tell us about a differential equation and its solution is important. The concept can be introduced without any knowledge of how to solve a differential equation, and so can be covered before actually getting to solving them.
Direction fields are important because they can provide a \textit{sketch of the solutions}, if any exist, and indicate the \textit{long-term behavior}; most of the time we are interested in a general picture of what happens as time passes.
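To make the long-term behavior concrete, here is a small standard-library Python sketch (the logistic equation is our illustrative choice, not taken from the text) that starts Euler trajectories of $y'=y(1-y)$ from several initial conditions; all of them are funneled toward the equilibrium $y=1$, exactly what a glance at the slope field suggests:

```python
def f(t, y):
    # Logistic equation y' = y * (1 - y): equilibria at y = 0 and y = 1
    return y * (1.0 - y)

def euler(f, y0, t_end=20.0, n=2000):
    # Explicit Euler integration; returns the final value y(t_end)
    h = t_end / n
    t, y = 0.0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Every trajectory started above y = 0 ends up near y = 1,
# regardless of whether it approaches from below or above
finals = [euler(f, y0) for y0 in (0.05, 0.5, 1.5, 3.0)]
print(finals)
```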
The \textbf{Laplace transform} is an integral transform, perhaps second only to the Fourier transform in its utility in solving physical problems. The Laplace transform \eqref{eq:lpl} is particularly useful in solving linear ordinary differential equations such as those arising in the analysis of electronic circuits. The Laplace transform $\mathcal{L}$ of a function $f(t)$, defined for $t \ge 0$, is
\begin{equation}\label{eq:lpl}
\mathcal{L}\{f\}(s)=\int_{0}^{\infty} e^{-st} f(t)\,\mathrm{d}t.
\end{equation}
The most important property of the Laplace transform is that differentiation and integration become multiplication and division, respectively. The transform turns integral equations and differential equations into polynomial equations, which are much easier to solve \cite{schiff2013laplace}. Once solved, use of the inverse Laplace transform reverts to the time domain.
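The transform is easy to check numerically. The sketch below (standard-library Python; the truncation point and step count are arbitrary choices of ours) approximates $\mathcal{L}\{e^{-at}\}(s)$ with the trapezoid rule and compares it with the well-known table result $1/(s+a)$:

```python
import math

def laplace_numeric(f, s, t_max=40.0, n=50_000):
    # Approximate L{f}(s) = integral of e^{-s t} f(t) dt from 0 to infinity
    # with the composite trapezoid rule, truncated at t_max where the
    # integrand has decayed to a negligible size
    h = t_max / n
    total = 0.5 * (f(0.0) + math.exp(-s * t_max) * f(t_max))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

a, s = 2.0, 3.0
f = lambda t: math.exp(-a * t)   # f(t) = e^{-a t}
approx = laplace_numeric(f, s)
exact = 1.0 / (s + a)            # table result: L{e^{-a t}}(s) = 1/(s + a)
print(approx, exact)
```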
A periodic orbit corresponds to a special type of solution for a dynamical system, namely one which repeats itself in time. A dynamical system exhibiting a stable periodic orbit is often called an \textit{oscillator}.
A \textbf{limit cycle} is an isolated closed trajectory. \textit{Isolated} means that neighboring trajectories are not closed; they spiral either towards or away from the limit cycle. A particle on the limit cycle appears after one period at the exact same spot. A limit cycle lives in the phase plane, as opposed to a periodic orbit, which is the more general, vector-valued notion.
If all neighboring trajectories approach the limit cycle, we say the limit cycle is \textbf{stable} or \textit{attracting}, as shown in \cref{f:lc_st}. Otherwise the limit cycle is \textbf{unstable}, or in exceptional cases, \textbf{half-stable}. Stable limit cycles are very important scientifically, as they model systems that exhibit self-sustained oscillations. In other words, these systems oscillate even in the absence of external periodic forcing.
Of the countless examples that could be given, we mention only a few: the beating of a heart; the periodic firing of a pacemaker neuron; daily rhythms in human body temperature and hormone secretion; chemical reactions that oscillate spontaneously; and dangerous self-excited vibrations in bridges and airplane wings. In each case, there is a standard oscillation of some preferred period, waveform, and amplitude. Oscillations are an important part of electronics \cite{oscillations}, too.
If the system is perturbed slightly, it always returns to the standard cycle. Limit cycles are inherently nonlinear phenomena; they cannot occur in linear systems \cite{strogatz2008nonlinear}.
Mentioning damping is important mainly because, in the real world, oscillations eventually stop due to frictional forces and the resulting dissipation of energy. In electronics there is no ideal oscillator either; a small amount of energy is lost every cycle due to electrical resistance.
Generally, damping is either linear or nonlinear. As a rule of thumb, linear damping is easily modeled mathematically and obeys known rules, while nonlinear damping does not \cite{institute1989estimation}. Nonlinear damping is advantageous in multiple cases, and research on this topic is still ongoing.
There are four mutually exclusive states that damping in a system can be in:
This equation describes the dynamics of a system with one degree of freedom in the presence of a linear restoring force and nonlinear damping. The function $f$ has the properties
that is, if the system absorbs energy for small amplitudes and dissipates it for large amplitudes, then one can expect self-exciting oscillations in the system.
The \textbf{Li\'{e}nard equation} has been intensively studied, as it can be used to model oscillating circuits. Under certain additional assumptions, Li\'{e}nard's theorem guarantees the existence and uniqueness of a limit cycle for such a system.
One of the most well-known oscillator models in dynamics is the \textbf{Van der Pol oscillator}, which is a special case of Li\'{e}nard's equation \eqref{eq:lnrd} and is described by the differential equation
The parameter $\mu$ determines the shape of the limit cycle. As it approaches 0, the cycle approaches the shape of a circle. Increasing the parameter, on the other hand, sharpens the curves.
The Van der Pol equation \eqref{eq:vdp} arises in the study of circuits containing vacuum tubes (triodes) and is derived from the earlier Rayleigh equation \cite{nahin2001science} (not to be confused with the Rayleigh-Plesset equation, an ordinary differential equation describing the dynamics of a spherical bubble in an infinite body of liquid).
The Van der Pol oscillator is a \textbf{self-sustaining}, \textbf{relaxation} oscillator. Self-sustainability in this context means that energy is fed into small oscillations and removed from large oscillations. Relaxation means that energy gradually accumulates over time and is then quickly released (relaxed). In electronics jargon, a relaxation oscillator is also called a \textit{free-running} oscillator. As already explained, it requires neither one (monostable) nor two (bistable) inputs for transitioning between states; it ``runs'' by itself, thus free-running.
Li\'{e}nard's theorem can be used to prove that the system described by the Van der Pol equation \eqref{eq:vdp} has a limit cycle \cite{sternberg2014dynamical}. If we want to visualize it, the one-dimensional form of the equation must first be \textit{transformed} into a two-dimensional form. Applying the Li\'{e}nard transformation $$y=x-\frac{x^3}{3}-\frac{\dot x}{\mu}$$ where the dot indicates the time derivative, the system can be written in its two-dimensional form \cite{kaplan2012understanding}:
\begin{align*}
\dot x &= \mu\left(x-\frac13 x^3 -y\right) \\
\dot y &= \frac{1}{\mu} x
\end{align*}
However, this form is not well known. A far more common form uses the transformation $y=\dot x$, which yields
\begin{align*}
\dot x &= y \\
\dot y &= \mu\left(1-x^2\right)y-x
\end{align*}
which can be plotted onto a direction field, as shown in \cref{f:vdp_m}. The stable limit cycle is visible, as well as trajectories from both sides being attracted towards it.
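The convergence onto the limit cycle can also be reproduced numerically. The following pure-Python sketch (step size, time horizon, and initial condition are our illustrative choices) integrates the two-dimensional system above with classical RK4 and measures the amplitude after the transient; for $\mu=1$ the known amplitude is close to $2$:

```python
def vdp(s, mu=1.0):
    # Van der Pol in first-order form: x' = y, y' = mu*(1 - x^2)*y - x
    x, y = s
    return (y, mu * (1.0 - x * x) * y - x)

def rk4_step(f, s, h):
    # One classical RK4 step for a two-component autonomous system
    k1 = f(s)
    k2 = f((s[0] + h * k1[0] / 2, s[1] + h * k1[1] / 2))
    k3 = f((s[0] + h * k2[0] / 2, s[1] + h * k2[1] / 2))
    k4 = f((s[0] + h * k3[0], s[1] + h * k3[1]))
    return (s[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            s[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

# Start well inside the cycle; the trajectory spirals outward onto it.
state, h = (0.1, 0.0), 0.01
xs = []
for i in range(40_000):                # integrate up to t = 400
    state = rk4_step(vdp, state, h)
    if i >= 30_000:                    # keep only the post-transient part
        xs.append(state[0])
amplitude = max(abs(v) for v in xs)
print(amplitude)   # close to 2 for mu = 1
```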
\caption{Phase portrait of the unforced Van der Pol oscillator with parameter $\mu=1$, showing the limit cycle and the direction field. The wxMaxima computing software was used for this purpose.}
Another phenomenon regarding nonlinear dynamics applied in the field of electrical engineering is known as Josephson Junction.
\textbf{Josephson junctions} are superconducting devices that are capable of generating voltage oscillations of extraordinarily high frequency, typically 10\textsuperscript{10}--10\textsuperscript{11} cycles per second \cite{van1981principles}. They consist of two superconducting layers separated by a very thin insulator that weakly couples them, as shown in \cref{f:jjunc}.
\begin{figure}[ht!]
\centering
\includegraphics[width=.4\linewidth]{jjunc}
\caption{The physical structure of a Josephson junction. Shown for illustration purposes.}
\label{f:jjunc}
\end{figure}
Although quantum mechanics is required to explain the origin of the Josephson effect, we can nevertheless dive into the dynamics of Josephson junctions in classical terms. They have been particularly useful for \textit{experimental} studies of nonlinear dynamics, because the equation governing a single junction resembles that of a pendulum \cite{strogatz1994nonlinear}.
Josephson junctions are used to detect extremely low electric potentials, for instance to detect far-infrared radiation from distant galaxies. They are also formed into arrays, because great potential is seen in this configuration; however, all the effects are yet to be fully understood.
This is the main section of our work. We will investigate the behavior of electrical components in circuits with respect to time and model them with differential equations.
A \textbf{resistor} is a linear component. It is described by \textbf{Ohm's law}, which states that the voltage $V$ across it is proportional to the current $I$ passing through its resistance $R$:
$$V=IR$$
An \textbf{inductor} is a reactive (energy-storing) component. It produces a voltage drop that is proportional to the \textit{rate of change} of the current through it, as described by \textbf{Faraday's law}
A \textbf{capacitor} is another reactive component. The voltage drop across it is, on the other hand, proportional to the charge stored in it. This behavior is derived from \textbf{Coulomb's law}
The RL circuit shown in \cref{f:rl} has a resistor and an inductor connected in series. A \textit{constant} voltage $V$ is applied when the switch is closed.
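Assuming the standard series RL model $L\,\mathrm{d}i/\mathrm{d}t + Ri = V$ (the component values below are illustrative, not from the text), the current approaches the steady state $V/R$ exponentially with time constant $L/R$. A short sketch cross-checks the closed-form solution against an explicit Euler integration:

```python
import math

# Series RL circuit with constant source V: L * di/dt + R * i = V.
# Closed-form solution: i(t) = (V/R) * (1 - exp(-R*t/L)).
V, R, L = 12.0, 4.0, 0.5   # volts, ohms, henries (illustrative values)

def i_exact(t):
    return (V / R) * (1.0 - math.exp(-R * t / L))

# Cross-check with explicit Euler on di/dt = (V - R*i) / L
h, t, i = 1e-4, 0.0, 0.0
while t < 1.0:
    i += h * (V - R * i) / L
    t += h

print(i, i_exact(1.0))   # both approach the steady state V/R = 3 A
```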
If the applied voltage is not constant but rather \textit{variable}, in the form $V(t)=A\cos(\omega t+\phi)$ or $V(t)=A\sin(\omega t+\phi)$, then things get more complex.
The RC circuit shown in \cref{f:rc} has a resistor and a capacitor connected in series. Again, a \textit{constant} voltage $V$ is applied when the switch is closed.
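Under the analogous series RC model $RC\,\mathrm{d}V_C/\mathrm{d}t + V_C = V$ (again with illustrative component values of our choosing), the capacitor voltage reaches about $63.2\,\%$ of the source after one time constant $\tau = RC$:

```python
import math

# Series RC circuit charging from a constant source V:
# R*C * dVc/dt + Vc = V, with solution Vc(t) = V * (1 - exp(-t / (R*C))).
V, R, C = 5.0, 1_000.0, 1e-6   # illustrative: 5 V, 1 kOhm, 1 uF -> tau = 1 ms
tau = R * C

def v_c(t):
    # Capacitor voltage at time t during charging
    return V * (1.0 - math.exp(-t / tau))

ratio = v_c(tau) / V
print(ratio)   # about 0.632: one time constant charges the capacitor to ~63.2% of V
```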
Second-order circuits contain both reactive elements. An RLC circuit consists of a resistor, an inductor, and a capacitor, and is shown in \cref{f:rlc}.
In order to proceed, we must define a new term. \textbf{Electromotive force} (EMF) is the force that moves electrons from a lower potential to a higher one, as opposed to the electric potential mentioned so far, which can move them only in the reverse direction. The source of EMF can be, for instance, a chemical reaction in a battery cell that induces the \textit{separation of charge}.
EMF is mentioned because the reactive elements (capacitor and inductor) store and release energy, just as cells do. They thus have the ability to move electrons from one potential to another, and so they have to be described in terms of electromotive force instead of just electric potential.
which is a second-order linear (homogeneous) differential equation.
The circuit itself is a damped oscillator. Writing the equation in its auxiliary form and finding its roots, we could obtain a formula for its \textit{damping factor}; however, that topic is beyond the scope of this work.
With a non-constant (variable) driving force, things get even more complicated. The Laplace transform, for instance, can be used to solve the resulting equations.
We have dived into multiple mathematical theories and scientific fields that have some connection to dynamical systems and electrical circuits. The topics mentioned only scratch the surface; every single one could fill a separate paper or even a book. In fact, thousands of books have been written on these topics, and there are probably thousands more to come.
There are other important topics, namely \textit{bifurcations} and \textit{chaos}, but we decided to skip them because of the vast amount of information they represent and also due to the lack of a direct connection between them and electrical circuits.
The main goal was to get a general idea about the connections between these terms and to form a picture of the problem area. Although perhaps in a chaotic way, that goal was met, and we tried to be as concise as possible along the way.
The authors would like to thank professor Carlos Par\'{e}s for having patience with them. Thanks also go to the well-written book \emph{Nonlinear Dynamics and Chaos} (S.H. Strogatz, 2008) for the introduction to, and for sparking curiosity in, the field of dynamical systems.