$$
\newcommand{\LetThereBe}[2]{\newcommand{#1}{#2}}
\newcommand{\letThereBe}[3]{\newcommand{#1}[#2]{#3}}
% Declare mathematics (so they can be overwritten for PDF)
\newcommand{\declareMathematics}[2]{\DeclareMathOperator{#1}{#2}}
\newcommand{\declareMathematicsStar}[2]{\DeclareMathOperator*{#1}{#2}}
% striked integral
\newcommand{\avint}{\mathop{\mathchoice{\,\rlap{-}\!\!\int}
{\rlap{\raise.15em{\scriptstyle -}}\kern-.2em\int}
{\rlap{\raise.09em{\scriptscriptstyle -}}\!\int}
{\rlap{-}\!\int}}\nolimits}
% \d does not work well for PDFs
\LetThereBe{\d}{\differential}
$$
$$
% Simply for testing
\LetThereBe{\foo}{\textrm{FIXME: this is a test!}}
% Font styles
\letThereBe{\mcal}{1}{\mathcal{#1}}
\letThereBe{\chem}{1}{\mathrm{#1}}
% Sets
\LetThereBe{\C}{\mathbb{C}}
\LetThereBe{\R}{\mathbb{R}}
\LetThereBe{\Z}{\mathbb{Z}}
\LetThereBe{\N}{\mathbb{N}}
\LetThereBe{\im}{\mathrm{i}}
\LetThereBe{\Im}{\mathrm{Im}}
\LetThereBe{\Re}{\mathrm{Re}}
% Sets from PDEs
\LetThereBe{\boundary}{\partial}
\letThereBe{\closure}{1}{\overline{#1}}
\letThereBe{\contf}{2}{C^{#2}(#1)}
\letThereBe{\compactContf}{2}{C_c^{#2}(#1)}
\letThereBe{\ball}{2}{B\brackets{#1, #2}}
\letThereBe{\closedBall}{2}{B\parentheses{#1, #2}}
\LetThereBe{\compactEmbed}{\subset\subset}
\letThereBe{\inside}{1}{#1^o}
\LetThereBe{\neighborhood}{\mcal O}
\letThereBe{\neigh}{1}{\neighborhood \brackets{#1}}
% Basic notation - vectors and random variables
\letThereBe{\vi}{1}{\boldsymbol{#1}} %vector or matrix
\letThereBe{\dvi}{1}{\vi{\dot{#1}}} %differentiated vector or matrix
\letThereBe{\vii}{1}{\mathbf{#1}} %if \vi doesn't work
\letThereBe{\dvii}{1}{\vii{\dot{#1}}} %if \dvi doesn't work
\letThereBe{\rnd}{1}{\mathup{#1}} %random variable
\letThereBe{\vr}{1}{\mathbf{#1}} %random vector or matrix
\letThereBe{\vrr}{1}{\boldsymbol{#1}} %random vector if \vr doesn't work
\letThereBe{\dvr}{1}{\vr{\dot{#1}}} %differentiated vector or matrix
\letThereBe{\vb}{1}{\pmb{#1}} %#TODO
\letThereBe{\dvb}{1}{\vb{\dot{#1}}} %#TODO
\letThereBe{\oper}{1}{\mathsf{#1}}
% Basic notation - general
\letThereBe{\set}{1}{\left\{#1\right\}}
\letThereBe{\seqnc}{4}{\set{#1_{#2}}_{#2 = #3}^{#4}}
\letThereBe{\Seqnc}{3}{\set{#1}_{#2}^{#3}}
\letThereBe{\brackets}{1}{\left( #1 \right)}
\letThereBe{\parentheses}{1}{\left[ #1 \right]}
\letThereBe{\dom}{1}{\mcal{D}\, \brackets{#1}}
\letThereBe{\complexConj}{1}{\overline{#1}}
% Special symbols
\LetThereBe{\const}{\mathrm{const}}
\LetThereBe{\konst}{\mathrm{konst.}}
\LetThereBe{\vf}{\varphi}
\LetThereBe{\ve}{\varepsilon}
\LetThereBe{\tht}{\theta}
\LetThereBe{\Tht}{\Theta}
\LetThereBe{\after}{\circ}
\LetThereBe{\lmbd}{\lambda}
% Shorthands
\LetThereBe{\xx}{\vi x}
\LetThereBe{\yy}{\vi y}
\LetThereBe{\XX}{\vi X}
\LetThereBe{\AA}{\vi A}
\LetThereBe{\bb}{\vi b}
\LetThereBe{\vvf}{\vi \vf}
\LetThereBe{\ff}{\vi f}
\LetThereBe{\gg}{\vi g}
% Basic functions
\letThereBe{\absval}{1}{\left| #1 \right|}
\LetThereBe{\id}{\mathrm{id}}
\letThereBe{\floor}{1}{\left\lfloor #1 \right\rfloor}
\letThereBe{\ceil}{1}{\left\lceil #1 \right\rceil}
\declareMathematics{\im}{im} %image
\declareMathematics{\tg}{tg}
\declareMathematics{\sign}{sign}
\declareMathematics{\card}{card} %cardinality
\letThereBe{\setSize}{1}{\left| #1 \right|}
\declareMathematics{\exp}{exp}
\letThereBe{\Exp}{1}{\exp\brackets{#1}}
\letThereBe{\indicator}{1}{\mathbb{1}_{#1}}
\declareMathematics{\arccot}{arccot}
\declareMathematics{\complexArg}{arg}
\declareMathematics{\gcd}{gcd} % Greatest Common Divisor
\declareMathematics{\lcm}{lcm} % Least Common Multiple
\letThereBe{\limInfty}{1}{\lim_{#1 \to \infty}}
\letThereBe{\limInftyM}{1}{\lim_{#1 \to -\infty}}
% Useful commands
\letThereBe{\onTop}{2}{\mathrel{\overset{#2}{#1}}}
\letThereBe{\onBottom}{2}{\mathrel{\underset{#2}{#1}}}
\letThereBe{\tOnTop}{2}{\mathrel{\overset{\text{#2}}{#1}}}
\LetThereBe{\EQ}{\onTop{=}{!}}
\LetThereBe{\letDef}{:=} %#TODO: change the symbol
\LetThereBe{\isPDef}{\onTop{\succ}{?}}
\LetThereBe{\inductionStep}{\tOnTop{=}{induct. step}}
% Optimization
\declareMathematicsStar{\argmin}{argmin}
\declareMathematicsStar{\argmax}{argmax}
\letThereBe{\maxOf}{1}{\max\set{#1}}
\letThereBe{\minOf}{1}{\min\set{#1}}
\declareMathematics{\prox}{prox}
\declareMathematics{\loss}{loss}
\declareMathematics{\supp}{supp}
\letThereBe{\Supp}{1}{\supp\brackets{#1}}
\LetThereBe{\constraint}{\text{s.t.}\;}
$$
$$
% Operators - Analysis
\LetThereBe{\hess}{\nabla^2}
\LetThereBe{\lagr}{\mcal L}
\LetThereBe{\lapl}{\Delta}
\declareMathematics{\grad}{grad}
\declareMathematics{\Dgrad}{D}
\LetThereBe{\gradient}{\nabla}
\LetThereBe{\jacobi}{\nabla}
\LetThereBe{\Jacobi}{\mathrm J}
\letThereBe{\jacobian}{2}{D_{#1}\brackets{#2}}
\LetThereBe{\d}{\mathrm{d}}
\LetThereBe{\dd}{\,\mathrm{d}}
\letThereBe{\partialDeriv}{2}{\frac {\partial #1} {\partial #2}}
\letThereBe{\npartialDeriv}{3}{\partialDeriv{^{#1} #2} {#3^{#1}}}
\letThereBe{\partialOp}{1}{\frac {\partial} {\partial #1}}
\letThereBe{\npartialOp}{2}{\frac {\partial^{#1}} {\partial #2^{#1}}}
\letThereBe{\pDeriv}{2}{\partialDeriv{#1}{#2}}
\letThereBe{\npDeriv}{3}{\npartialDeriv{#1}{#2}{#3}}
\letThereBe{\deriv}{2}{\frac {\d #1} {\d #2}}
\letThereBe{\nderiv}{3}{\frac {\d^{#1} #2} {\d #3^{#1}}}
\letThereBe{\derivOp}{1}{\frac {\d} {\d #1}\,}
\letThereBe{\nderivOp}{2}{\frac {\d^{#1}} {\d #2^{#1}}\,}
$$
$$
% Linear algebra
\letThereBe{\norm}{1}{\left\lVert #1 \right\rVert}
\letThereBe{\scal}{2}{\left\langle #1, #2 \right\rangle}
\letThereBe{\avg}{1}{\overline{#1}}
\letThereBe{\Avg}{1}{\bar{#1}}
\letThereBe{\linspace}{1}{\mathrm{lin}\set{#1}}
\letThereBe{\algMult}{1}{\mu_{\mathrm A} \brackets{#1}}
\letThereBe{\geomMult}{1}{\mu_{\mathrm G} \brackets{#1}}
\LetThereBe{\Nullity}{\mathrm{nullity}}
\letThereBe{\nullity}{1}{\Nullity \brackets{#1}}
\LetThereBe{\nulty}{\nu}
% Linear algebra - Matrices
\LetThereBe{\tr}{\top}
\LetThereBe{\Tr}{^\tr}
\LetThereBe{\pinv}{\dagger}
\LetThereBe{\Pinv}{^\dagger}
\LetThereBe{\Inv}{^{-1}}
\LetThereBe{\ident}{\vi{I}}
\letThereBe{\mtr}{1}{\begin{pmatrix}#1\end{pmatrix}}
\letThereBe{\bmtr}{1}{\begin{bmatrix}#1\end{bmatrix}}
\declareMathematics{\trace}{tr}
\declareMathematics{\diagonal}{diag}
$$
$$
% Statistics
\LetThereBe{\iid}{\overset{\text{i.i.d.}}{\sim}}
\LetThereBe{\ind}{\overset{\text{ind}}{\sim}}
\LetThereBe{\condp}{\,\vert\,}
\letThereBe{\complement}{1}{\overline{#1}}
\LetThereBe{\acov}{\gamma}
\LetThereBe{\acf}{\rho}
\LetThereBe{\stdev}{\sigma}
\LetThereBe{\procMean}{\mu}
\LetThereBe{\procVar}{\stdev^2}
\declareMathematics{\variance}{var}
\letThereBe{\Variance}{1}{\variance \brackets{#1}}
\declareMathematics{\cov}{cov}
\declareMathematics{\corr}{cor}
\letThereBe{\sampleVar}{1}{\rnd S^2_{#1}}
\letThereBe{\populationVar}{1}{V_{#1}}
\declareMathematics{\expectedValue}{\mathbb{E}}
\declareMathematics{\rndMode}{Mode}
\letThereBe{\RndMode}{1}{\rndMode\brackets{#1}}
\letThereBe{\expect}{1}{\expectedValue #1}
\letThereBe{\Expect}{1}{\expectedValue \brackets{#1}}
\letThereBe{\expectIn}{2}{\expectedValue_{#1} #2}
\letThereBe{\ExpectIn}{2}{\expectedValue_{#1} \brackets{#2}}
\LetThereBe{\betaF}{\mathrm B}
\LetThereBe{\fisherMat}{J}
\LetThereBe{\mutInfo}{I}
\LetThereBe{\expectedGain}{I_e}
\letThereBe{\KLDiv}{2}{D\brackets{#1 \parallel #2}}
\LetThereBe{\entropy}{H}
\LetThereBe{\diffEntropy}{h}
\LetThereBe{\probF}{\pi}
\LetThereBe{\densF}{\vf}
\LetThereBe{\att}{_t} %at time
\letThereBe{\estim}{1}{\hat{#1}}
\letThereBe{\estimML}{1}{\hat{#1}_{\mathrm{ML}}}
\letThereBe{\estimOLS}{1}{\hat{#1}_{\mathrm{OLS}}}
\letThereBe{\estimMAP}{1}{\hat{#1}_{\mathrm{MAP}}}
\letThereBe{\predict}{3}{\estim {\rnd #1}_{#2 | #3}}
\letThereBe{\periodPart}{3}{#1+#2-\ceil{#2/#3}#3}
\letThereBe{\infEstim}{1}{\tilde{#1}}
\letThereBe{\predictDist}{1}{{#1}^*}
\LetThereBe{\backs}{\oper B}
\LetThereBe{\diff}{\oper \Delta}
\LetThereBe{\BLP}{\oper P}
\LetThereBe{\arPoly}{\Phi}
\letThereBe{\ArPoly}{1}{\arPoly\brackets{#1}}
\LetThereBe{\maPoly}{\Theta}
\letThereBe{\MaPoly}{1}{\maPoly\brackets{#1}}
\letThereBe{\ARmod}{1}{\mathrm{AR}\brackets{#1}}
\letThereBe{\MAmod}{1}{\mathrm{MA}\brackets{#1}}
\letThereBe{\ARMA}{2}{\mathrm{ARMA}\brackets{#1, #2}}
\letThereBe{\sARMA}{3}{\mathrm{ARMA}\brackets{#1}\brackets{#2}_{#3}}
\letThereBe{\SARIMA}{3}{\mathrm{ARIMA}\brackets{#1}\brackets{#2}_{#3}}
\letThereBe{\ARIMA}{3}{\mathrm{ARIMA}\brackets{#1, #2, #3}}
\LetThereBe{\pacf}{\alpha}
\letThereBe{\parcorr}{3}{\rho_{#1 #2 | #3}}
\LetThereBe{\noise}{\mathscr{N}}
\LetThereBe{\jeffreys}{\mathcal J}
\LetThereBe{\likely}{\mcal L}
\letThereBe{\Likely}{1}{\likely\brackets{#1}}
\LetThereBe{\loglikely}{\mcal l}
\letThereBe{\Loglikely}{1}{\loglikely \brackets{#1}}
\LetThereBe{\CovMat}{\Gamma}
\LetThereBe{\covMat}{\vi \CovMat}
\LetThereBe{\rcovMat}{\vrr \CovMat}
\LetThereBe{\AIC}{\mathrm{AIC}}
\LetThereBe{\BIC}{\mathrm{BIC}}
\LetThereBe{\AICc}{\mathrm{AIC}_c}
\LetThereBe{\nullHypo}{H_0}
\LetThereBe{\altHypo}{H_1}
\LetThereBe{\rve}{\rnd \ve}
\LetThereBe{\rtht}{\rnd \theta}
\LetThereBe{\rX}{\rnd X}
\LetThereBe{\rY}{\rnd Y}
\LetThereBe{\rZ}{\rnd Z}
\LetThereBe{\rA}{\rnd A}
\LetThereBe{\rB}{\rnd B}
\LetThereBe{\vrZ}{\vr Z}
\LetThereBe{\vrY}{\vr Y}
\LetThereBe{\vrX}{\vr X}
% Bayesian inference
\LetThereBe{\paramSet}{\mcal T}
\LetThereBe{\sampleSet}{\mcal Y}
\LetThereBe{\bayesSigmaAlg}{\mcal B}
% Different types of convergence
\LetThereBe{\inDist}{\onTop{\to}{d}}
\letThereBe{\inDistWhen}{1}{\onBottom{\onTop{\longrightarrow}{d}}{#1}}
\LetThereBe{\inProb}{\onTop{\to}{P}}
\letThereBe{\inProbWhen}{1}{\onBottom{\onTop{\longrightarrow}{P}}{#1}}
\LetThereBe{\inMeanSq}{\onTop{\to}{L^2}}
\letThereBe{\inMeanSqWhen}{1}{\onBottom{\onTop{\longrightarrow}{L^2}}{#1}}
\LetThereBe{\convergeAS}{\tOnTop{\to}{a.s.}}
\letThereBe{\convergeASWhen}{1}{\onBottom{\tOnTop{\longrightarrow}{a.s.}}{#1}}
$$
$$
% Distributions
\letThereBe{\WN}{2}{\mathrm{WN}\brackets{#1,#2}}
\declareMathematics{\uniform}{Unif}
\declareMathematics{\binomDist}{Bi}
\declareMathematics{\negbinomDist}{NBi}
\declareMathematics{\betaDist}{Beta}
\declareMathematics{\betabinomDist}{BetaBin}
\declareMathematics{\gammaDist}{Gamma}
\declareMathematics{\igammaDist}{IGamma}
\declareMathematics{\invgammaDist}{IGamma}
\declareMathematics{\expDist}{Ex}
\declareMathematics{\poisDist}{Po}
\declareMathematics{\erlangDist}{Er}
\declareMathematics{\altDist}{A}
\declareMathematics{\geomDist}{Ge}
\LetThereBe{\normalDist}{\mathcal N}
%\declareMathematics{\normalDist}{N}
\letThereBe{\normalD}{1}{\normalDist \brackets{#1}}
\letThereBe{\mvnormalD}{2}{\normalDist_{#1} \brackets{#2}}
\letThereBe{\NormalD}{2}{\normalDist \brackets{#1, #2}}
\LetThereBe{\lognormalDist}{\log\normalDist}
$$
$$
% Game Theory
\LetThereBe{\doms}{\succ}
\LetThereBe{\isdom}{\prec}
\letThereBe{\OfOthers}{1}{_{-#1}}
\LetThereBe{\ofOthers}{\OfOthers{i}}
\LetThereBe{\pdist}{\sigma}
\letThereBe{\domGame}{1}{G_{DS}^{#1}}
\letThereBe{\ratGame}{1}{G_{Rat}^{#1}}
\letThereBe{\bestRep}{2}{\mathrm{BR}_{#1}\brackets{#2}}
\letThereBe{\perf}{1}{{#1}_{\mathrm{perf}}}
\LetThereBe{\perfG}{\perf{G}}
\letThereBe{\imperf}{1}{{#1}_{\mathrm{imp}}}
\LetThereBe{\imperfG}{\imperf{G}}
\letThereBe{\proper}{1}{{#1}_{\mathrm{proper}}}
\letThereBe{\finrep}{2}{{#2}_{#1{\text -}\mathrm{rep}}} %T-stage game
\letThereBe{\infrep}{1}{#1_{\mathrm{irep}}}
\LetThereBe{\repstr}{\tau} %strategy in a repeated game
\LetThereBe{\emptyhist}{\epsilon}
\letThereBe{\extrep}{1}{{#1^{\mathrm{rep}}}}
\letThereBe{\avgpay}{1}{#1^{\mathrm{avg}}}
\LetThereBe{\succf}{\pi} %successor function
\LetThereBe{\playf}{\rho} %player function
\LetThereBe{\actf}{\chi} %action function
% ODEs
\LetThereBe{\timeInt}{\mcal I}
\LetThereBe{\stimeInt}{\mcal J}
\LetThereBe{\Wronsk}{\mcal W}
\letThereBe{\wronsk}{1}{\Wronsk \parentheses{#1}}
\LetThereBe{\prufRadius}{\rho}
\LetThereBe{\prufAngle}{\vf}
\LetThereBe{\weyr}{\sigma}
\LetThereBe{\linDifOp}{\mathsf{L}}
\LetThereBe{\Hurwitz}{\vi H}
\letThereBe{\hurwitz}{1}{\Hurwitz \brackets{#1}}
% Cont. Models
\LetThereBe{\dirac}{\delta}
% PDEs
% \avint -- defined in format-respective tex files
\LetThereBe{\fundamental}{\Phi}
\LetThereBe{\fund}{\fundamental}
\letThereBe{\normaDeriv}{1}{\partialDeriv{#1}{\vec{n}}}
\letThereBe{\volAvg}{2}{\avint_{\ball{#1}{#2}}}
\LetThereBe{\VolAvg}{\volAvg{x}{\ve}}
\letThereBe{\surfAvg}{2}{\avint_{\boundary \ball{#1}{#2}}}
\LetThereBe{\SurfAvg}{\surfAvg{x}{\ve}}
\LetThereBe{\corrF}{\varphi^{\times}}
\LetThereBe{\greenF}{G}
\letThereBe{\reflect}{1}{\tilde{#1}}
\letThereBe{\unitBall}{1}{\alpha(#1)}
\LetThereBe{\conv}{*}
\letThereBe{\dotP}{2}{#1 \cdot #2}
\letThereBe{\translation}{1}{\tau_{#1}}
\declareMathematics{\dist}{dist}
\letThereBe{\regularizef}{1}{\eta_{#1}}
\letThereBe{\fourier}{1}{\widehat{#1}}
\letThereBe{\ifourier}{1}{\check{#1}}
\LetThereBe{\fourierOp}{\mcal F}
\LetThereBe{\ifourierOp}{\mcal F^{-1}}
\letThereBe{\FourierOp}{1}{\fourierOp\set{#1}}
\letThereBe{\iFourierOp}{1}{\ifourierOp\set{#1}}
\LetThereBe{\laplaceOp}{\mcal L}
\letThereBe{\LaplaceOp}{1}{\laplaceOp\set{#1}}
\letThereBe{\Norm}{1}{\absval{#1}}
% SINDy
\LetThereBe{\Koop}{\mcal K}
\letThereBe{\oneToN}{1}{\left[#1\right]}
\LetThereBe{\meas}{\mathrm{m}}
\LetThereBe{\stateLoss}{\mcal J}
\LetThereBe{\lagrm}{p}
% Stochastic analysis
\LetThereBe{\RiemannInt}{(\mcal R)}
\LetThereBe{\RiemannStieltjesInt}{(\mcal {R_S})}
\LetThereBe{\LebesgueInt}{(\mcal L)}
\LetThereBe{\ItoInt}{(\mcal I)}
\LetThereBe{\Stratonovich}{\circ}
\LetThereBe{\infMean}{\alpha}
\LetThereBe{\infVar}{\beta}
% Dynamical systems
\LetThereBe{\nUnit}{\mathrm N}
\LetThereBe{\timeUnit}{\mathrm T}
% BCF
\LetThereBe{\lieD}{\oper{L}}
\letThereBe{\lieDeriv}{1}{\lieD_{#1}}
\letThereBe{\nlieDeriv}{2}{\lieD^{#1}_{#2}}
\LetThereBe{\outerProd}{\otimes}
\LetThereBe{\wedgeProd}{\wedge}
\LetThereBe{\bialtProd}{\odot}
% Neural Networks
\LetThereBe{\neuralNet}{\mcal N}
\letThereBe{\allTo}{1}{#1_{\leftarrow}}
\letThereBe{\allFrom}{1}{#1^{\rightarrow}}
\LetThereBe{\actF}{\sigma}
\letThereBe{\extended}{1}{#1^{+}}
\declareMathematics{\flatten}{vec}
\LetThereBe{\hadamard}{\odot}
$$
This article was presented as a poster (which you can find here) at Julia & Optimization Days 2023 in Paris; more information, along with the entire schedule, can be found here.
With the abundance of data available nowadays, it is only natural to want to extract some (potentially useful) information from it. What's more, the observed systems can sometimes be thought of as being governed by an unknown differential equation, so we may strive to learn this governing equation from the data.
While a myriad of tools has arisen precisely for this job, one of the most popular is Sparse Identification of Nonlinear Dynamics (SINDy) by Brunton and Kutz (2019), which solves an \(l\)-dimensional regularized linear regression described by the optimization problem \[
\min_{\vi \Xi} \frac 1 2 \norm{\dvi X - \vi \Theta(\vi X) \vi \Xi}^2_2
\] subject to some sparsity-inducing constraint. Although it can be solved using various optimization techniques, we propose Dynamic Sequentially Thresholded Least Squares (DSTLS), a modification of Sequentially Thresholded Least Squares (STLS).
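To make the baseline concrete, below is a minimal sketch of the STLS iteration in Julia: repeatedly solve the least-squares problem and zero out coefficients whose magnitude falls below a fixed threshold \(\tau\). The function name, argument layout (\(\vi \Theta(\vi X)\) as an \(m \times p\) matrix `Θ`, the derivatives as an \(m \times l\) matrix `dX`), and the keyword defaults are illustrative assumptions, not taken from any particular package.

```julia
# Minimal STLS sketch: alternate between least squares and hard thresholding
# with a fixed cut-off τ (illustrative only).
using LinearAlgebra

function stls(Θ, dX; τ = 0.1, iters = 10)
    Ξ = Θ \ dX                       # initial least-squares estimate (p × l)
    for _ in 1:iters
        small = abs.(Ξ) .< τ         # coefficients below the fixed threshold
        Ξ[small] .= 0
        for k in axes(dX, 2)         # refit each equation on the surviving terms
            big = .!small[:, k]
            Ξ[big, k] = Θ[:, big] \ dX[:, k]
        end
    end
    return Ξ
end
```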
Definition 1 (DSTLS Optimization Problem) For the \(k\)-th variable of our system, we can estimate its derivative using the DSTLS optimization problem \[
\begin{gathered}
\min_{\vi \xi_k} \frac 1 2 \norm{\dvi X_{\cdot, k} - \vi \Theta(\vi X)\vi \xi_k}^2_2 \\
\constraint \forall i \in \set{1, \dots, p}: \quad \absval{\xi_{k_i}} \geq \tau \cdot \max \absval{\vi \xi_k}.
\end{gathered}
\]
In Definition 1, the symbols have the following meaning (a minimal implementation sketch in Julia follows the list):
- \(\vi X\) denotes the measurements of our system,
- \(\dvi X\) the derivatives (or their estimates) of our system,
- \(\vi \Theta(\vi X)\) is the library of candidate functions,
- \(\vi \xi_k\) is the \(k\)-th column of the coefficient matrix \(\vi \Xi\), corresponding to the equation for the derivative of the \(k\)-th system variable,
- \(p\) is the number of candidate functions,
- \(\tau\) is the sparsity threshold.
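Under the same assumptions as the STLS sketch above, the DSTLS iteration changes only how the cut-off is computed: it is rescaled in every pass by the largest absolute coefficient of the given column. This is my reading of Definition 1, not the reference implementation.

```julia
# Minimal DSTLS sketch: like STLS, but the cut-off for column k is
# τ · maximum(abs, Ξ[:, k]), so it adapts to each equation's scale.
using LinearAlgebra

function dstls(Θ, dX; τ = 0.1, iters = 10)
    Ξ = Θ \ dX                                  # initial least-squares estimate
    for _ in 1:iters
        for k in axes(dX, 2)
            cutoff = τ * maximum(abs, Ξ[:, k])  # dynamic, per-column threshold
            big = abs.(Ξ[:, k]) .>= cutoff      # keep only relatively large terms
            Ξ[:, k] .= 0
            Ξ[big, k] = Θ[:, big] \ dX[:, k]    # refit on the surviving candidates
        end
    end
    return Ξ
end
```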
This modification is motivated by the FitzHugh-Nagumo (FHN) model of a neuron \[\begin{align*}
\dot V &= V - \frac {V^3} 3 - W + i_e, \\
\dot W &= a \cdot (bV - cW + d),
\end{align*}\] with the following parameters and initial conditions \[
\begin{gathered}
a = 0.08, b = 1, c = 0.8, d = 0.7, i_e = 0.8, \\
V(0) = 3.3, W(0) = -2.
\end{gathered}
\]
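For concreteness, the FHN system with these parameters can be simulated to obtain the data matrix \(\vi X\), for instance with DifferentialEquations.jl; the time span, solver, and sampling step below are assumptions chosen only for illustration.

```julia
# Generate synthetic FHN data (illustrative time span and sampling).
using DifferentialEquations

function fhn!(du, u, p, t)
    V, W = u
    a, b, c, d, ie = p
    du[1] = V - V^3 / 3 - W + ie
    du[2] = a * (b * V - c * W + d)
end

p    = (0.08, 1.0, 0.8, 0.7, 0.8)        # a, b, c, d, i_e
u0   = [3.3, -2.0]                       # V(0), W(0)
prob = ODEProblem(fhn!, u0, (0.0, 100.0), p)
sol  = solve(prob, Tsit5(); saveat = 0.1)
X    = Array(sol)'                       # rows: time samples, columns: (V, W)
```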
The disparity between the magnitudes of the parameters in the two equations makes the ordinary STLS method sensitive to the value of the threshold \(\tau\). If \(\tau\) is chosen too large, only the constant zero solution is discovered for \(\dot W\); if it is small enough to recover \(\dot W\), unnecessary terms are identified for \(\dot V\). Scaling the threshold by the largest absolute value of the estimated parameters aims to address this issue.
To illustrate the proposed method, let us assume the derivatives are unknown and the data are corrupted by additive white Gaussian noise (AWGN) whose variance equals 5 percent of the data's variance. The derivatives are estimated with total-variation-regularized numerical differentiation, which is computed on the raw noisy data and simultaneously smooths them through the total-variation penalty. Finally, polynomials up to fourth order are used as the candidate library.
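The noise model and candidate library from the previous paragraph could look as follows; the 5 percent noise level follows the text, while the helper `poly_library` and the per-column scaling are my assumptions (the total-variation-regularized differentiation itself is omitted here).

```julia
# Corrupt the data with AWGN whose variance is 5 % of each column's variance,
# then build a polynomial candidate library up to 4th order.
using Statistics

noise_var = 0.05 .* var(X, dims = 1)
Xn = X .+ sqrt.(noise_var) .* randn(size(X)...)

function poly_library(data; order = 4)
    V, W = data[:, 1], data[:, 2]
    cols = [V .^ i .* W .^ j for i in 0:order, j in 0:order if i + j <= order]
    return reduce(hcat, cols)             # one column per monomial Vⁱ Wʲ
end

Θ = poly_library(Xn)  # derivatives dX would come from TV-regularized differentiation
```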
The DSTLS optimizer correctly chooses the appropriate candidate functions, unlike STLS. The model identified by plain STLS is \[\begin{align*}
\dot{V} =&\hphantom{+} 0.85 + 0.81 \cdot V + 0.27 \cdot W \cdot V ^2 + 0.28 \cdot V \cdot W ^2 \\
&- 1.02 \cdot W - 0.16 \cdot V ^2 - 0.22 \cdot V ^3 - 0.44 \cdot V \cdot W \\
&- 0.09 \cdot V ^2 \cdot W ^2,\\
\dot{W} =&\hphantom{+} 0.08 \cdot V
\end{align*}\]
whereas DSTLS recovers \[\begin{align*}
\dot{V} =&\hphantom{+} 0.71 + 0.95 \cdot V - 0.32 \cdot V ^3 - 0.91 \cdot W ,\\
\dot{W} =&\hphantom{+} 0.05 + 0.08 \cdot V - 0.06 \cdot W
\end{align*}\]
More comparisons and use cases are presented in the poster (and in a work-in-progress article). If I have time, I will add them here as well :).
All in all, I would like to thank the organizers of Julia & Optimization Days for such a great event, along with the participants who allowed for interesting discussions. Last but not least, a big thank you goes to my supervisor, assoc. prof. Lenka Přibylová, who arranged most of the bureaucratic side of things.
Presenting the poster…
References
Brunton, Steven L., and J. Nathan Kutz. 2019. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge University Press. https://doi.org/10.1017/9781108380690.