\begin{document}
\begin{abstract} We prove the Liv\v{s}ic Theorem for H\"{o}lder continuous cocycles with values in Banach rings. We consider a transitive homeomorphism ${\ensuremath{\mathbf{\sigma}}:X\to X}$ that satisfies the Anosov Closing Lemma, and a H\"{o}lder continuous map ${a:X\to B^\times}$ from a compact metric space $X$ to the set of invertible elements of some Banach ring $B$. We show that $a$ is a coboundary with a H\"{o}lder continuous transition function if and only if ${a(\ensuremath{\mathbf{\sigma}}^{n-1}p)\ldots a(\ensuremath{\mathbf{\sigma}} p)a(p)=e}$ for each periodic point $p=\ensuremath{\mathbf{\sigma}}^n p$.
\end{abstract}
\title{Liv\v{s}ic Theorem for Banach Rings}
\section{Introduction}
We assume that $X$ is a compact metric space, $G$ a complete metric group, and $\ensuremath{\mathbf{\sigma}}:X\to X$ a homeomorphism.
We say that a map $a:\mathbb{Z}\times X\to G$ is {\it a cocycle} over \ensuremath{\mathbf{\sigma}}\ if
$$a(n,x)=a(n-k,\ensuremath{\mathbf{\sigma}}^kx)a(k,x)\quad\text{for any }n,k\in\ensuremath{\mathbb{Z}}$$
Every map $a:X\to G$ generates a cocycle $a(n,x)$ defined as $$a(n,x)=a(\ensuremath{\mathbf{\sigma}}^{n-1}x)a(\ensuremath{\mathbf{\sigma}}^{n-2}x)\ldots a(x) \quad n>0$$
$$a(0,x)=Id$$
$$a(n,x)= a^{-1}(\ensuremath{\mathbf{\sigma}}^{n}x)\ldots a^{-1}(\ensuremath{\mathbf{\sigma}}^{-2}x)a^{-1}(\ensuremath{\mathbf{\sigma}}^{-1}x)\quad n<0$$
We see that $a(1,x)=a(x)$. In this paper we consider only cocycles generated by H\"older continuous maps $a:X\to G$.
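For concreteness, the relations above can be checked numerically. The following Python sketch (a hypothetical illustration, not part of the original argument) generates the cocycle $a(n,x)$ for a smooth map into $GL(2,\mathbb{R})$ over the doubling map $x\mapsto 2x \bmod 1$; the particular map, dimension, and base point are arbitrary choices.
\begin{verbatim}
import numpy as np

def sigma(x):
    # doubling map on the circle [0,1): a standard map with the closing property
    return (2.0 * x) % 1.0

def a(x):
    # a smooth (hence Holder) map into GL(2,R), the invertible elements of the
    # Banach algebra of 2x2 matrices with the operator norm
    t = 2 * np.pi * x
    return (1.0 + 0.5 * np.cos(t)) * np.array([[np.cos(t), -np.sin(t)],
                                               [np.sin(t),  np.cos(t)]])

def cocycle(n, x):
    # a(n,x) = a(sigma^{n-1} x) ... a(sigma x) a(x) for n > 0; identity for n = 0
    result = np.eye(2)
    for _ in range(n):
        result = a(x) @ result
        x = sigma(x)
    return result

# check the cocycle identity a(n,x) = a(n-k, sigma^k x) a(k,x)
x, n, k = 0.3, 7, 3
sk = x
for _ in range(k):
    sk = sigma(sk)
print(np.allclose(cocycle(n, x), cocycle(n - k, sk) @ cocycle(k, x)))  # True
\end{verbatim}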
We say that a H\"older continuous map $a:X\to G$ is a {\it coboundary } (or more precisely generates a cocycle which is a coboundary) if there is a H\"older continuous function $t:X\to G$ such that
$$ a(x)=t(\ensuremath{\mathbf{\sigma}} x)t^{-1}(x)$$
The function $t(x)$ is called a {\it transition map}.
If $a(x)$ is a coboundary then it is clear that
$$a(n,x)=t(\ensuremath{\mathbf{\sigma}}^n x)t^{-1}(x)$$
The question of whether a given cocycle is a coboundary arises naturally in many important problems in dynamical systems.
There is a simple necessary condition for a cocycle to be a coboundary. If $a(x)$ is a coboundary and $p\in X$ is a periodic point $\ensuremath{\mathbf{\sigma}}^n p=p$ then
$$a(\ensuremath{\mathbf{\sigma}}^{n-1}p)\ldots a(\ensuremath{\mathbf{\sigma}} p)a(p)=a(n,p)=t(\ensuremath{\mathbf{\sigma}}^n p)t^{-1}(p)=e$$
where $e$ is the identity element in the group $G$.
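Continuing the sketch above (again a hypothetical illustration), one can watch this telescoping happen numerically: for a coboundary $a(x)=t(\ensuremath{\mathbf{\sigma}} x)t^{-1}(x)$ the product along a periodic orbit collapses to the identity.
\begin{verbatim}
def t_map(x):
    # an arbitrary invertible-matrix-valued map t: X -> GL(2,R)
    s = 2 * np.pi * x
    return np.array([[2.0 + np.cos(s), 0.0],
                     [np.sin(s),       1.0]])

def a_cob(x):
    # a coboundary: a(x) = t(sigma x) t(x)^{-1}
    return t_map(sigma(x)) @ np.linalg.inv(t_map(x))

# p = 1/3 is periodic for the doubling map: 1/3 -> 2/3 -> 1/3
p, n = 1.0 / 3.0, 2
prod, x = np.eye(2), p
for _ in range(n):
    prod = a_cob(x) @ prod
    x = sigma(x)
print(np.allclose(prod, np.eye(2)))  # True: the periodic obstruction vanishes
\end{verbatim}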
We say that the {\it periodic obstructions vanish} for a cocycle $a(n,x)$ if
\begin{equation}\label{e0} a(\ensuremath{\mathbf{\sigma}}^{n-1}p)\ldots a(\ensuremath{\mathbf{\sigma}} p)a(p)=e\quad\forall p\in X\text{ with } \ensuremath{\mathbf{\sigma}}^np=p, n\in\mathbb{N} \end{equation}
A.~Liv\v{s}ic (see ~\cite{L1,L2}) proved that when \ensuremath{\mathbf{\sigma}}\ is a transitive Anosov map and the group $G$ is $\mathbb{R}$ or $\mathbb{R}^n$, a cocycle $a(x)$ is a coboundary if and only if the periodic obstructions vanish. This result is called the Liv\v{s}ic theorem. The proof of the Liv\v{s}ic theorem for other groups turned out to be harder. Nevertheless, over the last twenty years it was shown in a series of papers (see \cite{BN},\cite{PW},\cite{P},\cite{KS},\cite{NT},\cite{LW}) that for some groups, under an additional assumption on the growth rate of the cocycle $a(n,x)$, the condition (\ref{e0}) is also sufficient. For example, in \cite{BN} it was shown that if $G=B^\times$, the set of invertible elements of some Banach algebra, the periodic obstructions vanish, and $a(x)$ is close to the identity element $e$, then $a(x)$ is a coboundary.
The question remained whether this additional assumption follows from the fact that the products along periodic orbits are equal to $e$. In 2011 B.~Kalinin in \cite{Ka} made a breakthrough by proving the Liv\v{s}ic theorem for functions with values in $GL(n,\mathbb{R})$, and more generally in a connected Lie group, assuming only that condition (\ref{e0}) is satisfied.
He used Lyapunov exponents for different invariant measures to estimate the growth rate of the cocycle and then approximated the Lyapunov exponents for all invariant measures by the Lyapunov exponents at periodic points. For the latter the Oseledets Theorem was used. In this paper, we prove that a cocycle with values in the invertible elements of a Banach ring is a coboundary if and only if the periodic obstructions vanish. There is no analog of the Oseledets Theorem for Banach rings (or even Banach algebras). Still, we can define analogs of the highest and lowest Lyapunov exponents and, using a different argument, show that they can be approximated by the values of the cocycle at periodic points. Examples of Banach rings include Banach algebras, as well as Banach algebras with a local field $\ensuremath{\mathbb{F}}$ as the field of scalars. For these our result is new. Several already known results also follow: the Liv\v{s}ic Theorem for cocycles with values in $GL(n,\ensuremath{\mathbb{R}})$ (see \cite{Ka}) and $GL(n,\ensuremath{\mathbb{F}})$ (see \cite{LZ}).
As in \cite{Ka} we require that the map \ensuremath{\mathbf{\sigma}}\ be transitive and have the following property.
\begin{definition} We say that a homeomorphism $\ensuremath{\mathbf{\sigma}}:X\to X$ has a {\it closing property} if there exist positive numbers $\delta_0, \ensuremath{\lambda},C$ such that for any $x\in X$ and $n>0$ with $\text{dist}(x,\ensuremath{\mathbf{\sigma}}^n x)\le \delta_0$ we can find points $p,z\in X$ where
$$\ensuremath{\mathbf{\sigma}}^n p=p$$
and for every $i=0,1,\ldots, n$
$$\text{dist}(\ensuremath{\mathbf{\sigma}}^i p,\ensuremath{\mathbf{\sigma}}^i z)\le e^{-i\ensuremath{\lambda}}C\text{dist}(x,\ensuremath{\mathbf{\sigma}}^n x)\quad \text{dist}(\ensuremath{\mathbf{\sigma}}^i x,\ensuremath{\mathbf{\sigma}}^i z)\le e^{-(n-i)\ensuremath{\lambda}}C\text{dist}(x,\ensuremath{\mathbf{\sigma}}^n x) $$
We will call $\ensuremath{\lambda}$ the expansion constant of the map \ensuremath{\mathbf{\sigma}}.
\end{definition}
Anosov maps and shifts of finite type are the main examples of maps with the closing property.
\begin{definition} An associative (not necessarily commutative) ring $B$ with unity element $e$ is called a {\it Banach ring} if there is a function $\|\cdot\|:B\to\ensuremath{\mathbb{R}}$ such that
\begin{enumerate}
\item $\|a\|\ge 0$ and $\|a\|=0$ if and only if $a=0$.
\item $\|a+b\|\le \|a\|+\|b\|$.
\item $\|a\cdot b\|\le \|a\|\cdot \|b\|$.
\item The ring $B$ is a complete metric space with respect to the distance defined as $dist(a,b)=\|a-b\|$.
\end{enumerate}
\end{definition}
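As a concrete instance, the algebra of $d\times d$ real matrices with the operator norm is a Banach ring. The axioms can be spot-checked numerically (a hypothetical sketch; the dimension and sample count are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
op_norm = lambda M: np.linalg.norm(M, 2)  # operator (spectral) norm

for _ in range(1000):
    A, B = rng.normal(size=(2, 3, 3))     # two random 3x3 matrices
    assert op_norm(A + B) <= op_norm(A) + op_norm(B) + 1e-12  # axiom 2
    assert op_norm(A @ B) <= op_norm(A) * op_norm(B) + 1e-12  # axiom 3
print("triangle inequality and submultiplicativity hold on all samples")
\end{verbatim}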
We denote by $B^\times$ the set of invertible elements of a Banach ring $B$. The main result of this paper is:
\begin{main}\label{t2} Let $X$ be a compact metric space, $\ensuremath{\mathbf{\sigma}}:X\to X$ a transitive homeomorphism with the closing property. If $a:X\to B^\times $ is an $\alpha$-H\"older continuous function such that
$$ a(\ensuremath{\mathbf{\sigma}}^{n-1}p)\ldots a(\ensuremath{\mathbf{\sigma}} p)a(p)=e\quad \forall p\in X, n\in \mathbb{N} \text{ with } \ensuremath{\mathbf{\sigma}}^np=p$$
then there exists an $\alpha$-H\"older function $t:X\to B^\times$ such that
$$ a(x)=t(\ensuremath{\mathbf{\sigma}} x)t^{-1}(x)$$
\end{main}
\section{Subadditive Cocycles}
Let $\ensuremath{\mathbf{\sigma}}:X\to X$ be a continuous map. We will call a continuous function $s(n,x):\mathbb{Z}\times X\to \mathbb{R}$ \textit{ a subadditive cocycle} over $\ensuremath{\mathbf{\sigma}}$ if
$$s(n+m,x)\le s(n,\ensuremath{\mathbf{\sigma}}^m x)+s(m,x)$$
It follows from Kingman's Theorem on subadditive cocycles \cite{Ki, Furstenberg} that for every $\ensuremath{\mathbf{\sigma}}$-invariant measure $\mu$ and for almost all $x$ there exists a number
\begin{equation}\label{e1} r(x)=\lim_{n\to\infty} \frac{s(n,x)}{n}\end{equation}
If $\mu$ is ergodic then this number is the same for almost all $x$ and equals $\displaystyle{\inf_{n\ge 1}\int_X \frac{s(n,x)}{n}d\mu}$. For an ergodic $\mu$ we will call this number $r_\mu$. The set of all \ensuremath{\mathbf{\sigma}}-invariant ergodic measures we denote by $\mathcal{M}$. The set of points $x\in X$ for which the limit $(\ref{e1})$ exists we call regular and denote by $\mathcal{R}$.
Of course, there could be points for which the limit in $(\ref{e1})$ does not exist.
We can also consider the numbers $s_n=\displaystyle{\max_x s(n,x)}$. This sequence is subadditive, $s_{n+m}\le s_n+s_m$, and we denote by $r$ the following number:
\begin{equation}\label{e2}\displaystyle{r=\lim_{n\to\infty}\frac{s_n}{n}=\inf_{n\ge 1} \frac{s_n}{n}}
\end{equation}
It is known (see \cite{S}) that if $\ensuremath{\mathbf{\sigma}}$ is continuous and $X$ is compact then
\begin{equation}\label{e3} r=\sup_{x\in\mathcal{R}} r(x)=\sup_{\mu\in\mathcal{M}} r_\mu\end{equation}
For a periodic point $p=\ensuremath{\mathbf{\sigma}}^k p$ we denote by $r_{p}$ the quantity
$$r_{p}=\frac{ s(k,p)}{k}$$
It is easy to see that $r(p)$ exists (but can be $-\infty$) and $r(p)\le r_p$.
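The equality of the limit and the infimum in (\ref{e2}) is Fekete's lemma for subadditive sequences. A quick numerical illustration (hypothetical; any fixed matrix would do) takes $s_n=\ln\|A^n\|$, which is subadditive, and whose ratio $s_n/n$ converges to the logarithm of the spectral radius:
\begin{verbatim}
import numpy as np

A = np.array([[1.1, 1.0],
              [0.0, 0.9]])
s = lambda n: np.log(np.linalg.norm(np.linalg.matrix_power(A, n), 2))

ratios = [s(n) / n for n in range(1, 200)]
# lim s_n/n = inf s_n/n; here it equals log(spectral radius) = log 1.1
print(min(ratios), ratios[-1], np.log(1.1))
\end{verbatim}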
We will show that if $\ensuremath{\mathbf{\sigma}}$ has the closing property then the following holds:
\begin{theorem}\label{t3} Let $X$ be a compact metric space, $\ensuremath{\mathbf{\sigma}}:X\to X$ a homeomorphism with the closing property. We denote by $\mathcal{P}$ the set of all periodic points. If $a:X\to B^\times $ is an $\alpha$-H\"older continuous function, $a(n,x)$ is the cocycle generated by it, and ${s(n,x)=\ln \|a(n,x)\| }$ then
\begin{equation}\label{e4} r=\sup_{x\in\mathcal{R}} r(x)=\sup_{\mu\in\mathcal{M}} r_\mu\le \sup_{p\in\mathcal{P}}r_p \end{equation}
\end{theorem}
An easy corollary of this theorem is the following result, which is important for us.
\begin{corollary} \label{c3} Let $X$ be a compact metric space, $\ensuremath{\mathbf{\sigma}}:X\to X$ a homeomorphism with the closing property. If $a(n,p)=e$ for every periodic point $p$ with period $n$ then for any $\ensuremath{\varepsilon}>0$ there exists $C$ such that for all positive integers $n$ and all $x\in X$
$$\|a(n,x)\|\le Ce^{\ensuremath{\varepsilon} n}$$
$$\|a(-n,x)\|\le Ce^{\ensuremath{\varepsilon} n}$$
$$\|[a(n,x)]^{-1}\|\le Ce^{\ensuremath{\varepsilon} n}$$
\end{corollary}
\begin{proof} The first inequality follows from the fact that if $s(n,x)=\ln\|a(n,x)\|$ then for this subadditive cocycle $r_p=0$ for every periodic point $p$, and from (\ref{e4}) it follows that $r=0$. For the second inequality we can consider the cocycle $b(n,x)$ over $\ensuremath{\mathbf{\sigma}}^{-1}$ generated by $a^{-1}(x)$. Below, we will prove that if $a(x)$ is H\"{o}lder continuous then $a^{-1}(x)$ is also H\"{o}lder continuous, and therefore we can apply Theorem \ref{t3} to the cocycle $b(n,x)$ as well. But $a(-n,x)=b(n,x)$ and if $a(n,p)=e$ for every periodic point then
$$b(n,p)=a(-n,p)=a(-n,\ensuremath{\mathbf{\sigma}}^np)=[a(n,p)]^{-1}=e$$
So the rate of growth $r$ for $b(n,x)$ is also 0.
The last inequality follows from the fact that
$$[a(n,x)]^{-1}=b(n,\ensuremath{\mathbf{\sigma}}^n x)$$
The only thing left to show is that if $a(x)$ is $\alpha$-H\"{o}lder continuous then $a^{-1}(x)$ is also $\alpha$-H\"{o}lder continuous. For normed rings the operation of taking the inverse element is continuous (see \cite{Na}). Therefore the function $a^{-1}(x)$ is bounded. But
$$\|a^{-1}-b^{-1}\|=\|b^{-1}(b-a)a^{-1}\|\le\|b^{-1}\|\cdot\|(b-a)\|\cdot\|a^{-1}\|$$
Therefore, if the function $a(x)$ is $\alpha$-H\"{o}lder continuous, then $a^{-1}(x)$ is also $\alpha$-H\"{o}lder continuous.
\end{proof}
\section{Proof of Theorem \ref{t3}}
We will use the following result, proven in \cite[Proposition 4.2]{MK}.
\begin{lemman}[A. Karlsson, G. A. Margulis]\label{MK} Let $\ensuremath{\mathbf{\sigma}}:X\to X$ be a measurable map, $\mu$ an ergodic measure, and $s(n,x)$ a subadditive cocycle. For any $\ensuremath{\epsilon}>0$, let $E_\ensuremath{\epsilon}$ be the set of $x$ in $X$ for which there exist an integer $K(x)$ and infinitely many $n$ such that
$$s(n,x)-s(n-k,\ensuremath{\mathbf{\sigma}}^kx)\ge (r_\mu-\ensuremath{\epsilon})k$$
for all $k$ with $K(x)\le k\le n$. Let $E=\cap_{\ensuremath{\epsilon}>0} E_{\ensuremath{\epsilon}}$; then $\mu(E)=1$.
\end{lemman}
If $s(n,x)=\ln\|a(n,x)\|$ then the inequality in the lemma can be rewritten as
\begin{equation}\|a(n-k,\ensuremath{\mathbf{\sigma}}^k x)\|\le \|a(n,x)\|e^{-(r_\mu-\ensuremath{\epsilon})k}\label{fmk}\end{equation}
\begin{definition} Let $\gamma,\delta$ be positive numbers and let $n$ be a natural number. We say that a point $y$ is $(\gamma,\delta,n)$-close to $x$ if
$$dist(\ensuremath{\mathbf{\sigma}}^k x,\ensuremath{\mathbf{\sigma}}^k y)\le \delta e^{-\gamma k} \quad \text{for all}\quad 0\le k\le n$$
\end{definition}
\begin{prop} \label{l5} Let $\ensuremath{\mathbf{\sigma}}:X\to X$ be a homeomorphism, $a:X\to B^\times$ an $\alpha$-H\"older continuous function, and $s(n,x)=\ln\|a(n,x)\|$. For any $\gamma>0$ let $S_\gamma$ be the set of points $x$ in $X$ for which there exist a number $\delta=\delta(x)>0$ and infinitely many $n$ such that for any point $y$ which is $(\gamma,\delta,n)$-close to $x$
\begin{equation}\label{f11}
\|a(n,y)\|\ge \frac12\|a(n,x)\|
\end{equation}
Then $\mu(S_\gamma)=1$ for any ergodic invariant measure $\mu$ with $r-r_\mu<\alpha\gamma$.
\end{prop}
\begin{proof}
Let $\mu$ be an ergodic invariant measure with $r-r_\mu<\alpha\gamma$. We choose a number $0<\ensuremath{\varepsilon}<\frac13(\gamma\alpha-(r-r_\mu))$. Almost all points with respect to this measure satisfy the Karlsson-Margulis Lemma with this \ensuremath{\varepsilon}, and for almost all points $r(x)=r_\mu$. Take a point $x$ from the intersection of those two sets. Using the identity
$$b_nb_{n-1}\ldots b_1-a_na_{n-1}\ldots a_1=\sum_{k=1}^n b_n\ldots b_{k+1}(b_{k}-a_{k})a_{k-1}\ldots a_1$$
we can see that
$$\|a(n,x)-a(n,y)\|=\|\sum_{k=0}^{n-1} a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)[a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)]a(k,y)\|\le$$
\begin{equation}\label{f4}
\sum_{k=0}^{n-1} \|a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)\|\cdot\|a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)\|\cdot \|a(k,y)\|
\end{equation}
Our goal is to show that if we choose a sufficiently small $\delta$ then for infinitely many numbers $n$ and for every point $y$ that is $(\gamma,\delta,n)$-close to $x$ the sum $(\ref{f4})$ is smaller than $ \frac12\|a(n,x)\|$.
Let $K(x,\ensuremath{\varepsilon})$ and $n$ be as in the Karlsson-Margulis Lemma, and let a point $y$ be $(\gamma,\delta,n)$-close to $x$ for some $\delta$ that we specify later. By definition $\displaystyle{r=\lim_{k\to\infty} s_k/k}$, so we can find $K\ge K(x,\ensuremath{\varepsilon})$ such that $s_{k}<k(r+\ensuremath{\varepsilon})$ for all $k\ge K$, that is, $\|a(k,y)\|<e^{k(r+\ensuremath{\varepsilon})}$ for every $y\in X$. For every $k\ge K$ the factors in the product
\begin{equation}\label{f3} \|a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)\|\cdot\|a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)\|\cdot \|a(k,y)\|
\end{equation} can be bounded from above as follows: \\
$\|a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)\|\le \|a(n,x)\|e^{- (r_\mu-\ensuremath{\varepsilon})(k+1)}.\quad$ This follows from the Karlsson-Margulis Lemma. \\
$\|a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)\|\le H\delta^\alpha e^{-k\gamma\alpha}$ where $H$ is some positive constant. This follows from the fact that $a(x)$ is H\"older continuous and $y$ is $(\gamma,\delta,n)$-close to $x$.\\
$ \|a(k,y)\|\le e^{s_{k}}\le e^{k(r+\ensuremath{\varepsilon})}.\quad$ This follows from the definition of $K$.\\
If we combine those inequalities we can see that the product (\ref{f3}) is smaller than
$$ \|a(n,x)\|e^{- (r_\mu-\ensuremath{\varepsilon})(k+1)}\cdot H\delta^\alpha e^{-k\gamma\alpha}\cdot e^{k(r+\ensuremath{\varepsilon})}\le \|a(n,x)\|H\delta^\alpha e^{-k(\gamma\alpha-(r-r_\mu)-2\ensuremath{\varepsilon})}$$
After simplification we can write that
$$ \|a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)\|\cdot\|a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)\|\cdot \|a(k,y)\|\le \|a(n,x)\|H\delta^\alpha e^{-k\ensuremath{\varepsilon}}
$$
If we add those inequalities for $k\ge K$ we can see that
$$\|\sum_{k=K}^{n-1} a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)[a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)]a(k,y)\|\le$$
$$ \|a(n,x)\|H\delta^\alpha \sum_{k=0}^\infty e^{-k\ensuremath{\varepsilon}}= \|a(n,x)\|\frac{H\delta^\alpha}{1-e^{-\ensuremath{\varepsilon}}}$$
To estimate the expression in formula (\ref{f3}) for $k<K$ we set
$$M=1+\max_x \|a(x)\|$$
$$m=1+\max_x \|a^{-1}(x)\|$$
Then as before
$$\|a(\ensuremath{\mathbf{\sigma}}^{k}x)-a(\ensuremath{\mathbf{\sigma}}^{k}y)\|\le H \delta^\alpha e^{-k\gamma\alpha}$$
but
$$\|a(n-k-1,\ensuremath{\mathbf{\sigma}}^{k+1} x)\|=\|a(n,x)[a(k+1,x)]^{-1}\|\le \|a(n,x)\|\cdot m^{k+1}$$
and
$$\|a(k,y)\|\le M^{k}$$
So for $k<K$ the expression (\ref{f3}) is bounded by
$$\|a(n,x)\| m^{k+1}\cdot H\delta^\alpha e^{-k\gamma\alpha}\cdot M^{k}\le \|a(n,x)\|H\delta^\alpha (mM)^K$$
Finally,
$$\|a(n,x)-a(n,y)\|\le \|a(n,x)\|\delta^\alpha H\left(\frac{1}{1-e^{-\ensuremath{\varepsilon}}}+K(mM)^K\right)=\|a(n,x)\|\delta'$$
By choosing $\delta$ sufficiently small we can make $\delta'<1/2$. Then
$$\|a(n,y)\|= \|a(n,x)-(a(n,x)-a(n,y))\|\ge \|a(n,x)\|-\|a(n,x)-a(n,y)\|\ge$$
$$\ge \frac12\| a(n,x)\|$$
\end{proof}
To finish the proof of Theorem \ref{t3} we will need the following properties of maps with the closing property.
\begin{lemma}\label{l6} Let $\ensuremath{\mathbf{\sigma}}:X\to X$ be a homeomorphism with the closing property and expansion constant $\ensuremath{\lambda}$. Then for any positive numbers $\ensuremath{\varepsilon}$ and $\delta$ there is a number $\delta'$ such that if $dist(x,\ensuremath{\mathbf{\sigma}}^kx)\le\delta'$ and $k\ge n(1+\ensuremath{\varepsilon})$ then there is a point $p$ such that $\ensuremath{\mathbf{\sigma}}^k p=p$ and $p$ is $(\gamma,\delta,n)$-close to $x$, where $\gamma=\ensuremath{\varepsilon}\ensuremath{\lambda}$.
\end{lemma}
\begin{proof} It follows from the definition of the closing property that for $0\le i\le k$
$$dist(\ensuremath{\mathbf{\sigma}}^i x,\ensuremath{\mathbf{\sigma}}^i p)\le dist(\ensuremath{\mathbf{\sigma}}^i x,\ensuremath{\mathbf{\sigma}}^i z)+dist(\ensuremath{\mathbf{\sigma}}^i z,\ensuremath{\mathbf{\sigma}}^i p)\le 2C\delta' e^{-\ensuremath{\lambda} \min(i,k-i) }$$
The function $-\ensuremath{\lambda}\min(x,k-x)$ is convex, so the segment connecting the points $(0,0)$ and $(n,-\ensuremath{\lambda}\min(n,k-n))$ on the graph of this function stays above the graph. The linear function that corresponds to this segment is $-\gamma x$ where
$$\gamma=\frac{k-n}{n}\ensuremath{\lambda}\ge\ensuremath{\varepsilon}\ensuremath{\lambda}$$
Therefore the point $p$ satisfies the following inequalities:
$$dist(\ensuremath{\mathbf{\sigma}}^i x,\ensuremath{\mathbf{\sigma}}^i p)\le 2 C\delta' e^{-\gamma i } \quad 0\le i\le n$$
If we take $\delta'=\frac{\delta}{2 C}$ we can see that $p$ is $(\gamma,\delta,n)$-close to $x$.
\end{proof}
\begin{lemma}\label{l7} Let $\ensuremath{\mathbf{\sigma}}:X\to X$ be a homeomorphism. For any $\ensuremath{\varepsilon},\delta>0$ let $P_{\ensuremath{\varepsilon},\delta}$ be the set of points $x$ in $X$ for which there is an integer $N=N(x,\ensuremath{\varepsilon},\delta)$ such that for every $n>N$ there is an integer $k$ with $ n(1+\ensuremath{\varepsilon})<k<n(1+2\ensuremath{\varepsilon})$ for which
$$dist(x,\ensuremath{\mathbf{\sigma}}^k x)<\delta$$
If $\displaystyle{P=\cap_{\ensuremath{\varepsilon}>0,\delta>0} P_{\ensuremath{\varepsilon},\delta}}$ then $\mu(P)=1$ for any invariant measure $\mu$.
\end{lemma}
\begin{proof} It is enough to prove the statement for ergodic invariant measures. Let $\mu$ be an invariant ergodic measure. The support of a measure is the set of all points $x\in X$ such that the measure of any open ball centered at $x$ is not 0. The support of a measure on a compact metric space always has full measure {(see \cite{Fe})}. Since $X$ is compact, there is a sequence of balls $B_i$ which is a base of the topology. If we define $f_i(n,x)$ as the number of $k$ such that $\ensuremath{\mathbf{\sigma}}^k x\in B_i$ and $1\le k\le n$, then by Birkhoff's Ergodic Theorem $\displaystyle{\lim_{n\to\infty} \frac{f_i(n,x)}{n}}$ exists and equals $\mu(B_i)$ for almost all $x$. It is easy to see that any $x$ that belongs to the support of the measure and satisfies Birkhoff's Ergodic Theorem for all $i$ will belong to the set $P$. Indeed, if we choose $\delta>0$ then we know that the ball $B_\delta$ centered at $x$ has measure greater than 0. This ball is a countable union of some of the balls $B_i$, therefore there exists at least one ball $B_{i_0}$ such that $\mu(B_{i_0})>0$ and $B_{i_0}\subset B_\delta$. Now, using the numbers $\ensuremath{\varepsilon}$ and $\mu(B_{i_0})$, we choose a very small $\ensuremath{\epsilon}$; how small we specify later. For this $\ensuremath{\epsilon}>0$ we can find $N$ such that if $n>N$ then $|f_{i_0}(n,x)-\mu(B_{i_0})n|<\ensuremath{\epsilon} n$. If $n>N$ and there is no $k$ such that ${n(1+\ensuremath{\varepsilon})<k<n(1+2\ensuremath{\varepsilon})}$ and ${\ensuremath{\mathbf{\sigma}}^k x\in B_{i_0}}$ then $f_{i_0}(n(1+\ensuremath{\varepsilon}),x)=f_{i_0}(n(1+2\ensuremath{\varepsilon}),x)$. This is impossible if we choose $\ensuremath{\epsilon}$ very small, because in this case
$$ (\mu(B_{i_0})+\ensuremath{\epsilon})n(1+\ensuremath{\varepsilon})\ge f_{i_0}(n(1+\ensuremath{\varepsilon}),x)=f_{i_0}(n(1+2\ensuremath{\varepsilon}),x)\ge (\mu(B_{i_0})-\ensuremath{\epsilon})n(1+2\ensuremath{\varepsilon})$$
or
$$\frac{\mu(B_{i_0})+\ensuremath{\epsilon}}{\mu(B_{i_0})-\ensuremath{\epsilon}}\ge \frac{1+2\ensuremath{\varepsilon}}{1+\ensuremath{\varepsilon}}$$
When $\ensuremath{\epsilon}$ is small the left side is as close to 1 as we want, so we get a contradiction. This means that if $N$ is sufficiently big and $n>N$ then there is $k$ such that ${\ensuremath{\mathbf{\sigma}}^k x\in B_{i_0}\subset B_\delta }$ and ${n(1+\ensuremath{\varepsilon})\le k\le n(1+2\ensuremath{\varepsilon})}$. Therefore the set $P$ includes the intersection of two sets of full measure and has full measure.
\end{proof}
{\it Proof of Theorem \ref{t3}:} Choose any $\ensuremath{\varepsilon}>0$. We can find an ergodic invariant measure $\mu$ such that ${r-r_\mu<\min(\ensuremath{\varepsilon},\ensuremath{\varepsilon}\alpha\ensuremath{\lambda})}$. Choose a point $x$ such that $r(x)=r_\mu$ and $x$ belongs to the set $S_{\ensuremath{\varepsilon}\ensuremath{\lambda}}\cap P$, where $S_{\ensuremath{\varepsilon}\ensuremath{\lambda}}$ is as in Proposition \ref{l5} and $P$ is as in Lemma \ref{l7}. All those sets have full measure, so their intersection is not empty. For the point $x$ we can find $\delta$ such that for infinitely many $n_i$, if a point $p$ is $(\ensuremath{\varepsilon}\ensuremath{\lambda},\delta,n_i)$-close to $x$ then
\begin{equation}\label{f10}\|a(n_i,p)\|\ge\frac12\|a(n_i,x)\|\end{equation}
For this $\delta$ we can find $\delta'$ from Lemma \ref{l6}. Using this $\delta'$ and $\ensuremath{\varepsilon}$ we can find $N=N(\ensuremath{\varepsilon},\delta')$ from Lemma \ref{l7} such that if $n_i>N$ then there is $k$ such that $n_i(1+\ensuremath{\varepsilon})\le k\le n_i(1+2\ensuremath{\varepsilon})$ and $dist(\ensuremath{\mathbf{\sigma}}^kx,x)<\delta'$. Then it follows from Lemma \ref{l6} that there is a periodic point $p$ with period $k$ which is $(\ensuremath{\varepsilon}\ensuremath{\lambda},\delta,n_i)$-close to $x$ and therefore satisfies the {inequality (\ref{f10})}.
Now, we estimate $\|a(k,p)\|$. Let $N'$ be a number such that if $n>N'$ then
$$\|a(n,x)\|\ge e^{n(r_\mu-\ensuremath{\varepsilon})}\ge e^{n(r-2\ensuremath{\varepsilon})}$$
We can always choose $n_i$ bigger than both $N$ and $N'$. Denote ${m=\ln\max_y\|a^{-1}(y)\|}$. Then
$$\|a(n_i,p)\|=\|a(-(k-n_i),p)a(k,p)\|\le \|a(-(k-n_i),p)\|\cdot\|a(k,p)\|$$
so
$$\|a(k,p)\|\ge \frac{\|a(n_i,p)\|}{e^{m(k-n_i)}}\ge\frac12\frac{\|a(n_i,x)\|}{e^{2m\ensuremath{\varepsilon} n_i}}\ge \frac12 e^{(r-2\ensuremath{\varepsilon}-2m\ensuremath{\varepsilon})n_i}$$
We see that
$$r_p=\frac{\ln\|a(k,p)\|}{k}\ge\frac{(r-2\ensuremath{\varepsilon}-2m\ensuremath{\varepsilon})n_i-\ln 2}{(1+2\ensuremath{\varepsilon})n_i}$$
The number $m$ does not depend on the choice of $x,n_i$ and $p$, so by choosing $\ensuremath{\varepsilon}$ very small and $n_i$ very big we can make $r_p$ as close to $r$ as we want.
\qed\\
\section{Proof of the Main Theorem }
After Theorem \ref{t3} is established we can use Corollary \ref{c3} to show that the growth of $\|a(n,x)\|$ is sub-exponential. This allows us to use the idea of the original Liv\v{s}ic proof for cocycles with values in Banach rings.
H.~Bercovici and V.~Nitica in \cite{BN} (Theorem 3.2) showed that if $\ensuremath{\mathbf{\sigma}}$ is a transitive Anosov map, the periodic obstructions vanish, and
\begin{equation}\label{f12}\begin{split}\|a(x)\|&\le 1+\delta\\
\|a^{-1}(x)\|&\le 1+\delta
\end{split}\end{equation}
for some $\delta$ that depends on $\ensuremath{\mathbf{\sigma}}$, then $a(x)$ is a coboundary. From Corollary \ref{c3} we get a slightly weaker statement: if the periodic obstructions vanish then for any $\delta>0$ there exists $C>0$ such that for any positive integer $n$
\begin{equation*}\begin{split}\|a(n,x)\|&\le C(1+\delta)^n\\
\|[a(n,x)]^{-1}\|&\le C(1+\delta)^n
\end{split}\end{equation*}
Those inequalities are actually enough to repeat the arguments from \cite{BN} with some small changes, but we instead refer to a more general theorem proven in \cite{G}, which considers cocycles over maps satisfying the closing property, with values in abstract groups satisfying some conditions. We will need a couple more definitions.
\begin{definition} If $G$ is a group with a metric $dist$ and $g\in G$, we define the distortion of the element $g$ as
$$|g|=\sup_{f\neq h}\max\left[ \frac{dist(gf,gh)}{dist{(f,h)}},\frac{dist(fg,hg)}{dist{(f,h)}},\frac{dist(g^{-1}f,g^{-1}h)}{dist{(f,h)}},\frac{dist(fg^{-1},hg^{-1})}{dist{(f,h)}}\right]$$
We say that a group is Lipschitz if $|g|<\infty$ for all $g\in G$.
\end{definition}
It is easy to see that for Banach rings if we define $$dist(f,h)=\max(\|f-h\|,\|f^{-1}-h^{-1}\|)$$ then $|g|\le \max(\|g\|,\|g^{-1}\|)$ and $B^\times$ is Lipschitz.
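This bound on the distortion can be spot-checked numerically (a hypothetical sketch; only the left- and right-translation ratios are shown, the inverse cases being analogous):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
inv = np.linalg.inv
norm = lambda M: np.linalg.norm(M, 2)
dist = lambda f, h: max(norm(f - h), norm(inv(f) - inv(h)))

for _ in range(1000):
    # small perturbations of the identity, invertible with high probability
    f, h, g = np.eye(3) + 0.2 * rng.normal(size=(3, 3, 3))
    bound = max(norm(g), norm(inv(g)))
    slack = 1e-9 * (1 + bound)
    assert dist(g @ f, g @ h) <= bound * dist(f, h) + slack  # left translation
    assert dist(f @ g, h @ g) <= bound * dist(f, h) + slack  # right translation
print("|g| <= max(||g||, ||g^{-1}||) on all samples")
\end{verbatim}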
\begin{definition} We call the following number the {\it rate of distortion} of a cocycle $a(x):X\to G$:
$$r(a)=\lim_{n\to\infty} \frac{\sup_{x\in X}\ln |a(n,x)|}{n}$$
\end{definition}
\begin{theorem}\label{t9} Let $G$ be a Lipschitz group with the property that there are numbers $\ensuremath{\epsilon}$ and $D$ such that $dist(g,e)\le\ensuremath{\epsilon}$ implies $|g|\le D$, and let \ensuremath{\mathbf{\sigma}}\ be a transitive homeomorphism with the closing property and expansion constant $\ensuremath{\lambda}$. If the rate of distortion of an $\alpha$-H\"older continuous cocycle $a:X\to G$ is smaller than $\alpha\ensuremath{\lambda}/2$ and the periodic obstructions vanish, then $a(x)$ is a coboundary with an $\alpha$-H\"older continuous transition function $t(x)$.
\end{theorem}
\begin{proof} See \cite{G}\end{proof}
{\it Proof of the Main Theorem.} If $a(n,p)=e$ for every periodic point then it follows from Corollary \ref{c3} that the distortion rate of $a(n,x)$ is less than or equal to 0. In the group $B^\times$, if $dist(e,g)<\frac12$ then $\|e-g\|<\frac12$ and $\|e-g^{-1}\|<\frac12$, so $\|g\|,\|g^{-1}\|\le\frac32$. We see that $|g|\le\frac32$, therefore by Theorem \ref{t9} the cocycle $a(n,x)$ is a coboundary with
$\alpha$-H\"older continuous transition function.\qed
\end{document} |
\begin{document}
\pagenumbering{gobble}
\begin{titlepage}
\title{Sensitivity Oracles for All-Pairs Mincuts}
\author{
Surender Baswana\thanks{Department of Computer Science \& Engineering, IIT Kanpur, Kanpur -- 208016, India, [email protected]}
\and
Abhyuday Pandey\thanks{Department of Computer Science \& Engineering, IIT Kanpur, Kanpur -- 208016, India, [email protected]}
}
\maketitle
\begin{abstract}
{
Let $G=(V,E)$ be an undirected unweighted graph on $n$ vertices and $m$ edges. We address the problem of sensitivity oracle for all-pairs mincuts in $G$ defined as follows.
Build a compact data structure that, on receiving any pair of vertices $s,t\in V$ and failure (or insertion) of any edge as query, can efficiently report the mincut between $s$ and $t$ after the failure (or the insertion).
To the best of our knowledge, there exists no data structure for this problem which takes $o(mn)$ space and a non-trivial query time.
We present the following results.
\begin{enumerate}
\item Our first data structure occupies ${\cal O}(n^2)$ space and guarantees ${\cal O}(1)$ query time to report the value of resulting $(s,t)$-mincut upon failure (or insertion) of any edge. Moreover, the set of vertices defining a resulting $(s,t)$-mincut after the update can be reported in ${\cal O}(n)$ time which is worst-case optimal.
\item
Our second data structure optimizes space at the expense of increased query time. It takes ${\cal O}(m)$ space -- which is also the space taken by $G$. The query time is ${\cal O}(\min(m,n c_{s,t}))$ where $c_{s,t}$ is the value of the mincut between $s$ and $t$ in $G$. This query time is faster by a factor of $\Omega(\min(m^{1/3},\sqrt{n}))$ compared to the best known deterministic algorithm \cite{DBLP:conf/focs/GoldbergR97a,DBLP:conf/stoc/KargerL98,DBLP:journals/corr/abs-2003-08929} to compute an $(s,t)$-mincut from scratch.
\item
If we are only interested in knowing if failure (or insertion) of an edge changes the value of $(s,t)$-mincut, we can distribute our ${\cal O}(n^2)$ space data structure evenly among $n$ vertices. For any failed (or inserted) edge we only require the data structures stored at its endpoints to determine if the value of $(s,t)$-mincut has changed for any $s,t \in V$.
Moreover, using these data structures we can also output efficiently a compact encoding of all pairs of vertices whose mincut value is changed after the failure (or insertion) of the edge.
\end{enumerate}
}
\end{abstract}
\end{titlepage}
\pagebreak
\pagenumbering{arabic}
\section{Introduction}
\subfile{src/introduction}
\section{Preliminaries} \label{sec:prelimiaries}
\subfile{src/preliminaries}
\section{Insights into \texorpdfstring{$3$}{3}-vertex mincuts} \label{sec:query-transformation}
\subfile{src/compact-graph-query-transf}
\section{A Compact Graph for Query Transformation}
\subfile{src/ft-steiner-connectivity}
\section{Compact Data Structure for Sensitivity Query} \label{sec:final-ds}
\subfile{src/graph-contractions}
\section{Distributed Sensitivity Oracle}
\label{sec:distributed-sensitivity-oracle}
\subfile{src/distributed-sensitivity-oracle}
\section{Conclusion}
\label{sec:conclusion}
\subfile{src/conclusion}
\pagebreak
\appendix
\subfile{src/appendix}
\end{document} |
\begin{document}
\mainmatter
\title{Learning with a Drifting Target Concept}
\titlerunning{Learning with a Drifting Target Concept}
\author{Steve Hanneke \and Varun Kanade \and Liu Yang}
\authorrunning{Steve Hanneke, Varun Kanade, and Liu Yang}
\institute{Princeton, NJ USA.\\
\email{[email protected]}
\and
D\'{e}partement d'informatique, \'{E}cole normale sup\'{e}rieure, Paris, France.\\
\email{[email protected]}
\and
IBM T.J. Watson Research Center, Yorktown Heights, NY USA.\\
\email{[email protected]}
}
\maketitle
\begin{abstract}
We study the problem of learning in the presence of a drifting target concept. Specifically,
we provide bounds on the error rate at a given time, given a learner with access to a history
of independent samples labeled according to a target concept that can change on each round.
One of our main contributions is a refinement of the best previous results for
polynomial-time algorithms for the space of linear separators under a uniform distribution.
We also provide general results for an algorithm capable of adapting to a variable rate of drift
of the target concept.
Some of the results also describe an active learning variant of this setting, and provide bounds on the
number of queries for the labels of points in the sequence sufficient to obtain the stated bounds
on the error rates.
\end{abstract}
\section{Introduction}
Much of the work on statistical learning has focused on
learning settings in which the concept to be learned is static
over time.
However, there are many application areas where this is not
the case. For instance, in the problem of face recognition,
the concept to be learned actually changes over time as
each individual's facial features evolve over time. In this
work, we study the problem of learning with a drifting
target concept. Specifically, we consider a statistical
learning setting, in which data arrive i.i.d. in a stream,
and for each data point, the learner is required to predict
a label for the data point at that time. We are then
interested in obtaining low error rates for these predictions.
The target labels are generated from a function known to reside
in a given concept space, and at each time $t$ the target function
is allowed to change by at most some distance $\Delta_{t}$: that is,
the probability the new target function disagrees with the previous
target function on a random sample is at most $\Delta_{t}$.
This framework has previously been studied in a number of articles.
The classic works of \cite{helmbold:91,helmbold:94,bartlett:96,long:99,bartlett:00} and \cite{barve:97}
together provide a general analysis of a
very-much related setting. Though the objectives in these works are
specified slightly differently, the results established there are
easily translated into our present framework,
and we summarize many of the relevant results from this literature
in Section~\ref{sec:background}.
While the results in these classic works are general, the best guarantees
on the error rates are only known for methods having no guarantees
of computational efficiency.
In a more recent effort, the work of \cite{min_concept} studies this problem
in the specific context of learning a homogeneous linear separator,
when all the $\Delta_{t}$ values are identical.
They propose a polynomial-time algorithm (based on the modified Perceptron
algorithm of \cite{stream_perceptron}),
and prove a bound on the number of mistakes it makes as a function of
the number of samples, when the data distribution satisfies a
certain condition called ``$\lambda$-good'' (which generalizes a useful
property of the uniform distribution on the origin-centered unit sphere).
However, their result is again worse than that obtainable by the known
computationally-inefficient methods.
Thus, the natural question is whether there exists a polynomial-time algorithm
achieving roughly the same guarantees on the error rates known for the inefficient methods.
In the present work, we resolve this question in the case of learning homogeneous
linear separators under the uniform distribution, by proposing a polynomial-time
algorithm that indeed achieves roughly the same bounds on the error rates
known for the inefficient methods in the literature.
This represents the main technical contribution of this work.
We also study the interesting problem of \emph{adaptivity} of an
algorithm to the sequence of $\Delta_{t}$ values, in the setting where
$\Delta_{t}$ may itself vary over time. Since the values $\Delta_{t}$
might typically not be accessible in practice, it seems important to
have learning methods having no explicit dependence on the sequence $\Delta_{t}$.
We propose such a method below, and prove that it achieves roughly the
same bounds on the error rates known for methods in the literature
which require direct access to the $\Delta_{t}$ values.
Also in the context of variable $\Delta_{t}$ sequences, we discuss
conditions on the sequence $\Delta_{t}$ necessary and sufficient
for there to exist a learning method guaranteeing a \emph{sublinear}
rate of growth of the number of mistakes.
We additionally study an \emph{active learning} extension to this
framework, in which, at each time, after making its prediction,
the algorithm may decide whether or not to request access to the
label assigned to the data point at that time. In addition to guarantees on the
error rates (for \emph{all} times, including those for which the label was not observed),
we are also interested in bounding the number of labels we expect the algorithm to
request, as a function of the number of samples encountered thus far.
\section{Definitions and Notation}
\label{sec:definitions}
Formally, in this setting, there is a fixed distribution $\mathcal{P}$ over the instance space $\mathcal X$,
and there is a sequence of independent $\mathcal{P}$-distributed unlabeled data $X_{1},X_{2},\ldots$.
There is also a concept space $\mathbb C$, and a sequence of target functions $\mathbf{h}^{*} = \{h^{*}_{1},h^{*}_{2},\ldots\}$ in $\mathbb C$.
Each $t$ has an associated target label $Y_{t} = h^{*}_{t}(X_{t})$.
In this context, a (passive) learning algorithm is required, on each round $t$,
to produce a classifier $\hat{h}_{t}$ based on the observations $(X_{1},Y_{1}),\ldots,(X_{t-1},Y_{t-1})$,
and we denote by $\hat{Y}_{t} = \hat{h}_{t}(X_{t})$ the corresponding prediction by the algorithm
for the label of $X_{t}$. For any classifier $h$, we define ${\rm er}_{t}(h) = \mathcal{P}(x : h(x) \neq h^{*}_{t}(x))$.
We also say the algorithm makes a ``mistake'' on instance $X_{t}$ if $\hat{Y}_{t} \neq Y_{t}$;
thus, ${\rm er}_{t}(\hat{h}_{t}) = \mathbb P( \hat{Y}_{t} \neq Y_{t} | (X_{1},Y_{1}),\ldots,(X_{t-1},Y_{t-1}) )$.
For notational convenience, we will suppose the $h^{*}_{t}$ sequence is
chosen independently from the $X_{t}$ sequence (i.e., $h^{*}_{t}$ is chosen prior
to the ``draw'' of $X_{1},X_{2},\ldots \sim \mathcal{P}$), and is not random.
In each of our results, we will suppose $\mathbf{h}^{*}$ is chosen from some set $S$ of
sequences in $\mathbb C$. In particular, we are interested in describing the sequence $\mathbf{h}^{*}$
in terms of the magnitudes of \emph{changes} in $h^{*}_{t}$ from one time to the next.
Specifically, for any sequence $\boldsymbol{\Delta} = \{\Delta_{t}\}_{t=2}^{\infty}$ in $[0,1]$,
we denote by $S_{\boldsymbol{\Delta}}$ the set of all sequences $\mathbf{h}^{*}$ in $\mathbb C$ such that,
$\forall t \in \mathbb{N}$, $\mathcal{P}(x : h^{*}_{t}(x) \neq h^{*}_{t+1}(x)) \leq \Delta_{t+1}$.
Throughout this article, we denote by $d$ the VC dimension of $\mathbb C$ \cite{vapnik:71},
and we suppose $\mathbb C$ is such that $1 \leq d < \infty$.
Also, for any $x \in \mathbb{R}$, define ${\rm Log}(x) = \ln(\max\{x,e\})$.
\section{Background: $(\epsilon,S)$-Tracking Algorithms}
\label{sec:background}
As mentioned, the classic literature on learning with a drifting target concept
is expressed in terms of a slightly different model. In order to relate those
results to our present setting, we first introduce the classic setting.
Specifically, we consider a model introduced by \cite{helmbold:94},
presented here in a more-general form inspired by \cite{bartlett:00}.
For a set $S$ of sequences $\{h_{t}\}_{t=1}^{\infty}$ in $\mathbb C$,
and a value $\epsilon > 0$, an algorithm $\mathcal A$ is said to be
\emph{$(\epsilon,S)$-tracking} if $\exists t_{\epsilon} \in \mathbb{N}$ such that,
for any choice of $\mathbf{h}^{*} \in S$,
$\forall T \geq t_{\epsilon}$,
the prediction $\hat{Y}_{T}$ produced by $\mathcal A$ at time $T$ satisfies
\begin{equation*}
\mathbb P\left( \hat{Y}_{T} \neq Y_{T} \right) \leq \epsilon.
\end{equation*}
Note that the value of the probability in the above expression
may be influenced by $\{X_{t}\}_{t=1}^{T}$, $\{h^{*}_{t}\}_{t=1}^{T}$,
and any internal randomness of the algorithm $\mathcal A$.
The focus of the results expressed in this classical model is determining
sufficient conditions on the set $S$ for there to exist an $(\epsilon,S)$-tracking algorithm,
along with bounds on the sufficient size of $t_{\epsilon}$.
These conditions on $S$ typically take the form of an assumption on the
drift rate, expressed in terms of $\epsilon$. Below, we summarize
several of the strongest known results for this setting.
\subsection{Bounded Drift Rate}
\label{sec:classic-constant-drift}
The simplest, and perhaps most elegant, result for $(\epsilon,S)$-tracking algorithms
is for the set $S$ of sequences with a bounded drift rate. Specifically, for any $\Delta \in [0,1]$,
define $S_{\Delta} = S_{\boldsymbol{\Delta}}$, where $\boldsymbol{\Delta}$ is such that $\Delta_{t+1} = \Delta$ for every $t \in \mathbb{N}$.
The study of this problem was initiated in the original work of \cite{helmbold:94}.
The best known general results are due to \cite{long:99}: namely,
that for some $\Delta_{\epsilon} = \Theta( \epsilon^{2} / d )$,
for every $\epsilon \in (0,1]$, there exists an $(\epsilon,S_{\Delta})$-tracking algorithm for all values
of $\Delta \leq \Delta_{\epsilon}$.\footnote{In fact, \cite{long:99} also allowed the distribution
$\mathcal{P}$ to vary gradually over time. For simplicity, we will only discuss the case of fixed $\mathcal{P}$.}
This refined an earlier result of \cite{helmbold:94} by a logarithmic factor.
\cite{long:99} further argued that this result can be achieved with $t_{\epsilon} = \Theta(d/\epsilon)$.
The algorithm itself involves a beautiful modification of the one-inclusion graph prediction
strategy of \cite{haussler:94}; since its specification is somewhat involved,
we refer the interested reader to the original work of \cite{long:99} for the details.
\subsection{Varying Drift Rate: Nonadaptive Algorithm}
\label{sec:classic-varying-drift}
In addition to the concrete bounds for the case $\mathbf{h}^{*} \in S_{\Delta}$,
\cite{helmbold:94} additionally present an elegant general result. Specifically,
they argue that, for any $\epsilon > 0$, and any $m = \Omega\left( \frac{d}{\epsilon}{\rm Log}\frac{1}{\epsilon} \right)$,
if $\sum_{i=1}^{m} \mathcal{P}(x : h^{*}_{i}(x) \neq h^{*}_{m+1}(x)) \leq m \epsilon / 24$, then
for $\hat{h} = \mathop{\rm argmin}_{h \in \mathbb C} \sum_{i=1}^{m} \mathbbm{1}[ h(X_{i}) \neq Y_{i} ]$,
$\mathbb P( \hat{h}(X_{m+1}) \neq h^{*}_{m+1}(X_{m+1}) ) \leq \epsilon$.\footnote{They in fact
prove a more general result, which also applies to methods approximately minimizing
the number of mistakes, but for simplicity we will only discuss this basic version of the result.}
This result immediately inspires an algorithm $\mathcal A$ which, at every time $t$,
chooses a value $m_{t} \leq t-1$, and predicts $\hat{Y}_{t} = \hat{h}_{t}(X_{t})$,
for $\hat{h}_{t} = \mathop{\rm argmin}_{h \in \mathbb C} \sum_{i=t-m_{t}}^{t-1} \mathbbm{1}[ h(X_{i}) \neq Y_{i} ]$.
We are then interested in choosing $m_{t}$ to minimize the value of $\epsilon$ obtainable
via the result of \cite{helmbold:94}. However, that method is based on the
values $\mathcal{P}( x : h^{*}_{i}(x) \neq h^{*}_{t}(x) )$, which would typically not
be accessible to the algorithm. However, suppose instead we have access to a
sequence $\boldsymbol{\Delta}$ such that $\mathbf{h}^{*} \in S_{\boldsymbol{\Delta}}$.
In this case, we could approximate $\mathcal{P}( x : h^{*}_{i}(x) \neq h^{*}_{t}(x) )$
by its \emph{upper bound} $\sum_{j = i+1}^{t} \Delta_{j}$. In this case,
we are interested in choosing $m_{t}$ to minimize the smallest value of $\epsilon$
such that $\sum_{i=t-m_{t}}^{t-1} \sum_{j=i+1}^{t} \Delta_{j} \leq m_{t} \epsilon / 24$
and $m_{t} = \Omega\left( \frac{d}{\epsilon} {\rm Log}\frac{1}{\epsilon} \right)$.
One can easily verify that this minimum is obtained at a value
\begin{equation*}
m_{t} = \Theta\left( \mathop{\rm argmin}_{m \leq t-1} \frac{1}{m} \sum_{i=t-m}^{t-1} \sum_{j=i+1}^{t} \Delta_{j} + \frac{d {\rm Log}(m/d)}{m} \right),
\end{equation*}
and via the result of \cite{helmbold:94} (applied to the sequence $X_{t-m_{t}},\ldots,X_{t}$)
the resulting algorithm has
\begin{equation}
\label{eqn:hl94}
\mathbb P\left( \hat{Y}_{t} \neq Y_{t} \right) \leq O\left( \min_{1 \leq m \leq t-1} \frac{1}{m} \sum_{i=t-m}^{t-1} \sum_{j=i+1}^{t} \Delta_{j} + \frac{d {\rm Log}(m/d)}{m} \right).
\end{equation}
As a special case, if every $t$ has $\Delta_{t} = \Delta$ for a fixed value $\Delta \in [0,1]$,
this result recovers the bound $\sqrt{ d \Delta {\rm Log}(1/\Delta) }$,
which is only slightly larger than that obtainable from the best bound of \cite{long:99}.
It also applies to far more general and more interesting sequences $\boldsymbol{\Delta}$,
including some that allow periodic large jumps (i.e., $\Delta_{t} = 1$ for some indices $t$),
others where the sequence $\Delta_{t}$ converges to $0$, and so on.
Note, however, that the algorithm obtaining this bound
directly depends on the sequence $\boldsymbol{\Delta}$.
One of the contributions of the present work is to remove this requirement, while
maintaining essentially the same bound, though in a slightly different form.
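In code, the nonadaptive window choice described above might look as follows (a hypothetical sketch: \texttt{deltas} stands for the sequence $\boldsymbol{\Delta}$, and the constants hidden by the $\Theta(\cdot)$ are set to $1$ for illustration):
\begin{verbatim}
import math

def choose_window(deltas, t, d):
    # pick m minimizing (1/m) sum_{i=t-m}^{t-1} sum_{j=i+1}^{t} Delta_j
    #                   + d*Log(m/d)/m,   with Log(x) = ln(max(x, e));
    # deltas[j] plays the role of Delta_j
    Log = lambda x: math.log(max(x, math.e))
    best_m, best_val = 1, float("inf")
    drift, suffix = 0.0, 0.0  # suffix = sum_{j=i+1}^{t} Delta_j for i = t-m
    for m in range(1, t):
        suffix += deltas[t - m + 1]
        drift += suffix
        val = drift / m + d * Log(m / d) / m
        if val < best_val:
            best_m, best_val = m, val
    return best_m

# example: constant drift Delta = 1e-4, VC dimension d = 5, time t = 1000
print(choose_window([1e-4] * 1001, 1000, 5))
\end{verbatim}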
\subsection{Computational Efficiency}
\label{sec:classic-consistency}
\cite{helmbold:94} also proposed a reduction-based approach, which
sometimes yields computationally efficient methods, though the tolerable $\Delta$
value is smaller. Specifically, given any (randomized) polynomial-time algorithm $\mathcal A$
that produces a classifier $h \in \mathbb C$ with $\sum_{t=1}^{m} \mathbbm{1}[ h(x_{t}) \neq y_{t} ] = 0$
for any sequence $(x_1,y_1),\ldots,(x_m,y_m)$ for which such a classifier $h$ exists
(called the \emph{consistency problem}),
they propose a polynomial-time algorithm that is $(\epsilon,S_{\Delta})$-tracking
for all values of $\Delta \leq \Delta_{\epsilon}^{\prime}$,
where $\Delta_{\epsilon}^{\prime} = \Theta\left( \frac{\epsilon^{2}}{d^{2} {\rm Log}(1/\epsilon)} \right)$.
This is slightly worse (by a factor of $d {\rm Log}(1/\epsilon)$) than the drift rate tolerable by the
(typically inefficient) algorithm mentioned above.
However, it does sometimes yield computationally-efficient methods.
For instance, there are known polynomial-time algorithms for the consistency problem for the classes of
linear separators, conjunctions, and axis-aligned rectangles.
\subsection{Lower Bounds}
\label{sec:classic-lower-bound}
\cite{helmbold:94} additionally prove \emph{lower bounds} for specific concept spaces:
namely, linear separators and axis-aligned rectangles. They specifically argue that, for
$\mathbb C$ a concept space
\begin{equation*}
{\rm BASIC}_{n} = \{ \cup_{i=1}^{n} [i/n,(i+a_i)/n) : \mathbf{a} \in [0,1]^{n} \}
\end{equation*}
on $[0,1]$, under $\mathcal{P}$ the uniform distribution on $[0,1]$,
for any $\epsilon \in [0,1/e^{2}]$ and $\Delta_{\epsilon} \geq e^{4} \epsilon^{2} / n$,
for any algorithm $\mathcal A$, and any $T \in \mathbb{N}$, there exists a choice of $\mathbf{h}^{*} \in S_{\Delta_{\epsilon}}$
such that the prediction $\hat{Y}_{T}$ produced by $\mathcal A$ at time $T$ satisfies
$\mathbb P\left( \hat{Y}_{T} \neq Y_{T} \right) > \epsilon$.
Based on this, they conclude that no $(\epsilon,S_{\Delta_{\epsilon}})$-tracking algorithm exists.
Furthermore, they observe that the space ${\rm BASIC}_{n}$ is embeddable in many
commonly-studied concept spaces, including halfspaces and axis-aligned
rectangles in $\mathbb{R}^{n}$, so that for $\mathbb C$ equal to either of these spaces,
there also is no $(\epsilon,S_{\Delta_{\epsilon}})$-tracking algorithm.
\section{Adapting to Arbitrarily Varying Drift Rates}
\label{sec:general}
This section presents a general bound on the error rate at each time,
expressed as a function of the rates of drift, which are allowed to be \emph{arbitrary}.
Most-importantly, in contrast to the methods from the literature discussed above,
the method achieving this general result is \emph{adaptive} to the drift rates,
so that it requires no information about the drift rates in advance. This is an
appealing property, as it essentially allows the algorithm to learn under an \emph{arbitrary}
sequence $\mathbf{h}^{*}$ of target concepts; the difficulty of the task
is then simply reflected in the resulting bounds on the error rates:
that is, faster-changing sequences of target functions result in larger bounds on
the error rates, but do not require a change in the algorithm itself.
\subsection{Adapting to a Changing Drift Rate}
\label{sec:adaptive-varying-rate}
Recall that the method yielding \eqref{eqn:hl94} (based on the work of \cite{helmbold:94})
required access to the sequence $\boldsymbol{\Delta}$ of changes to achieve the stated guarantee
on the expected number of mistakes. That method is based on choosing a classifier to predict $\hat{Y}_{t}$
by minimizing the number of mistakes among the previous $m_{t}$ samples, where $m_{t}$ is a value
chosen based on the $\boldsymbol{\Delta}$ sequence. Thus, the key to modifying this algorithm to make it
adaptive to the $\boldsymbol{\Delta}$ sequence is to determine a suitable choice of $m_{t}$ without reference
to the $\boldsymbol{\Delta}$ sequence. The strategy we adopt here is to use the \emph{data} to determine
an appropriate value $\hat{m}_{t}$ to use. Roughly (ignoring logarithmic factors for now), the insight
that enables us to achieve this feat is that,
for the $m_{t}$ used in the above strategy, one can show that $\sum_{i=t-m_{t}}^{t-1} \mathbbm{1}[ h^{*}_{t}(X_{i}) \neq Y_{i} ]$
is roughly $\tilde{O}(d)$, and that
making the prediction $\hat{Y}_{t}$ with \emph{any} $h \in \mathbb C$ with roughly $\tilde{O}(d)$ mistakes
on these samples will suffice to obtain the stated bound on the error rate (up to logarithmic factors).
Thus, if we replace $m_{t}$ with the largest value $m$ for which $\min_{h \in \mathbb C} \sum_{i=t-m}^{t-1} \mathbbm{1}[ h(X_{i}) \neq Y_{i}]$
is roughly $\tilde{O}(d)$, then the above observation implies $m \geq m_{t}$. This then
implies that, for $\hat{h} = \mathop{\rm argmin}_{h \in \mathbb C} \sum_{i=t-m}^{t-1} \mathbbm{1}[ h(X_{i}) \neq Y_{i} ]$,
we have that $\sum_{i=t-m_{t}}^{t-1} \mathbbm{1}[ \hat{h}(X_{i}) \neq Y_{i} ]$ is also roughly $\tilde{O}(d)$,
so that the stated bound on the error rate will be achieved (aside from logarithmic factors)
by choosing $\hat{h}_{t}$ as this classifier $\hat{h}$.
There are a few technical modifications to this argument needed to get the logarithmic factors to work out properly,
and for this reason the actual algorithm and proof below are somewhat more involved.
Specifically, consider the following algorithm (the value of the universal constant $K \geq 1$ will be specified below).
\begin{bigboxit}
0. For $T = 1,2,\ldots$\\
1. \quad Let $\hat{m}_{T} \!=\! \max\!\left\{ m \!\in\! \{1,\ldots,T\!-\!1\} : \min\limits_{h \in \mathbb C} \max\limits_{m^{\prime} \leq m} \frac{\sum_{t=T-m^{\prime}}^{T-1} \mathbbm{1}[h(X_{t}) \neq Y_{t}]}{d {\rm Log}(m^{\prime}/d) + {\rm Log}(1/\delta)} < K \right\}$\\
2. \quad Let $\hat{h}_{T} = \mathop{\rm argmin}\limits_{h \in \mathbb C} \max\limits_{m^{\prime} \leq \hat{m}_{T}} \frac{\sum_{t=T-m^{\prime}}^{T-1} \mathbbm{1}[h(X_{t}) \neq Y_{t}]}{d {\rm Log}(m^{\prime}/d) + {\rm Log}(1/\delta)}$
\end{bigboxit}
Note that the classifiers $\hat{h}_{t}$ chosen by this algorithm have no dependence on $\boldsymbol{\Delta}$,
or indeed anything other than the data $\{(X_{i},Y_{i}) : i < t\}$, and the concept space $\mathbb C$.
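A direct (and computationally naive) rendering of this procedure for a finite hypothesis class might look as follows; this is a hypothetical sketch for illustration only, where \texttt{hypotheses} stands in for $\mathbb C$ and \texttt{history} holds $(X_{1},Y_{1}),\ldots,(X_{T-1},Y_{T-1})$:
\begin{verbatim}
import math

def Log(x):
    return math.log(max(x, math.e))

def adaptive_predict(hypotheses, history, x_T, d, delta, K=1.0):
    # one round: choose the window m_T and classifier h_T as in steps 1-2
    # of the boxed algorithm, then predict h_T(x_T);
    # runs in time O(|hypotheses| * T^2) per round
    T = len(history) + 1

    def score(h, m):
        # max over m' <= m of (mistakes of h on the last m' points)
        #                     / (d Log(m'/d) + Log(1/delta))
        worst, mistakes = 0.0, 0
        for mp in range(1, m + 1):
            X, Y = history[T - 1 - mp]
            mistakes += int(h(X) != Y)
            worst = max(worst, mistakes / (d * Log(mp / d) + Log(1 / delta)))
        return worst

    m_hat = max((m for m in range(1, T)
                 if min(score(h, m) for h in hypotheses) < K), default=1)
    h_hat = min(hypotheses, key=lambda h: score(h, m_hat))
    return h_hat(x_T)
\end{verbatim}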
\begin{theorem}
\label{thm:epst-adaptive}
Fix any $\delta \in (0,1)$, and let $\mathcal A$ be the above algorithm.
For any sequence $\boldsymbol{\Delta}$ in $[0,1]$, for any $\mathcal{P}$ and any choice of $\mathbf{h}^{*} \in S_{\boldsymbol{\Delta}}$,
for every $T \in \mathbb{N} \setminus \{1\}$, with probability at least $1-\delta$,
\begin{equation*}
{\rm er}_{T}\left( \hat{h}_{T} \right)
\leq O\left( \min_{1 \leq m \leq T-1} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d) + {\rm Log}(1/\delta)}{m} \right).
\end{equation*}
\end{theorem}
Before presenting the proof of this result, we first state a crucial lemma, which follows immediately
from a classic result of \cite{vapnik:82,vapnik:98}, combined with the fact (from \cite{vidyasagar:03}, Theorem 4.5)
that the VC dimension of the collection of sets $\{ \{x : h(x) \neq g(x)\} : h,g \in \mathbb C \}$ is at most $10 d$.
\begin{lemma}
\label{lem:vc-ratio}
There exists a universal constant $c \in [1,\infty)$ such that,
for any class $\mathbb C$ of VC dimension $d$, $\forall m \in \mathbb{N}$, $\forall \delta \in (0,1)$,
with probability at least $1-\delta$,
every $h,g \in \mathbb C$ have
\begin{multline*}
\left| \mathcal{P}(x : h(x) \neq g(x)) - \frac{1}{m}\sum_{t=1}^{m} \mathbbm{1}[h(X_{t}) \neq g(X_{t})] \right|
\\ \leq c \sqrt{ \left(\frac{1}{m}\sum_{t=1}^{m} \mathbbm{1}[h(X_{t}) \neq g(X_{t})] \right) \frac{d {\rm Log}(m/d)+{\rm Log}(1/\delta)}{m}}
\\ + c \frac{d {\rm Log}(m/d) + {\rm Log}(1/\delta)}{m}.
\end{multline*}
\end{lemma}
We are now ready for the proof of Theorem~\ref{thm:epst-adaptive}.
For the constant $K$ in the algorithm, we will choose $K = 145 c^{2}$,
for $c$ as in Lemma~\ref{lem:vc-ratio}.
\begin{proof}[Proof of Theorem~\ref{thm:epst-adaptive}]
Fix any $T \in \mathbb{N}$ with $T \geq 2$, and define
\begin{multline*}
m_{T}^{*} = \max\left\{ m \in \{1,\ldots,T-1\} : \forall m^{\prime} \leq m, \phantom{\sum_{t=T-m^{\prime}}^{T-1}} \right.
\\ \left. \sum_{t=T-m^{\prime}}^{T-1} \mathbbm{1}[h^{*}_{T}(X_{t}) \neq Y_{t}] < K ( d {\rm Log}(m^{\prime}/d) + {\rm Log}(1/\delta) )\right\}.
\end{multline*}
Note that
\begin{equation}
\label{eqn:adaptive-target-mistakes}
\sum_{t=T-m_{T}^{*}}^{T-1} \mathbbm{1}[h^{*}_{T}(X_{t}) \neq Y_{t}] \leq K (d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta)),
\end{equation}
and also note that (since $h^{*}_{T} \in \mathbb C$) $\hat{m}_{T} \geq m_{T}^{*}$, so that (by definition of $\hat{m}_{T}$ and $\hat{h}_{T}$)
\begin{equation*}
\sum_{t=T-m_{T}^{*}}^{T-1} \mathbbm{1}[\hat{h}_{T}(X_{t}) \neq Y_{t}] \leq K ( d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta) )
\end{equation*}
as well.
Therefore,
\begin{align*}
\sum_{t=T-m_{T}^{*}}^{T-1} \!\!\mathbbm{1}[h^{*}_{T}(X_{t}) \neq \hat{h}_{T}(X_{t})]
& \leq
\sum_{t=T-m_{T}^{*}}^{T-1} \!\!\mathbbm{1}[h^{*}_{T}(X_{t}) \neq Y_{t}]
+
\sum_{t=T-m_{T}^{*}}^{T-1} \!\!\mathbbm{1}[Y_{t} \neq \hat{h}_{T}(X_{t})]
\\ & \leq
2 K ( d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta) ).
\end{align*}
Thus, by Lemma~\ref{lem:vc-ratio}, for each $m \in \mathbb{N}$,
with probability at least $1-\delta / (6 m^{2})$, if $m_{T}^{*} = m$, then
\begin{equation*}
\mathcal{P}(x : \hat{h}_{T}(x) \neq h^{*}_{T}(x))
\leq
(2K+c \sqrt{2K} + c) \frac{d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(6(m_{T}^{*})^{2}/\delta)}{m_{T}^{*}}.
\end{equation*}
Furthermore, since
${\rm Log}(6(m_{T}^{*})^{2}) \leq \sqrt{2K} d {\rm Log}(m_{T}^{*} / d)$,
this is at most
\begin{equation*}
2(K+c \sqrt{2K}) \frac{d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta)}{m_{T}^{*}}.
\end{equation*}
By a union bound (over values $m \in \mathbb{N}$), we have that with probability at least $1-\sum_{m=1}^{\infty} \delta/(6 m^{2}) \geq 1 - \delta/3$,
\begin{equation*}
\mathcal{P}(x : \hat{h}_{T}(x) \neq h^{*}_{T}(x))
\leq 2(K+c \sqrt{2K}) \frac{d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta)}{m_{T}^{*}}.
\end{equation*}
Let us denote
\begin{equation*}
\tilde{m}_{T} = \mathop{\rm argmin}_{m \in \{1,\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d) + {\rm Log}(1/\delta)}{m}.
\end{equation*}
Note that, for any $m^{\prime} \in \{1,\ldots,T-1\}$ and $\delta \in (0,1)$,
if $\tilde{m}_{T} \geq m^{\prime}$, then
\begin{align*}
& \min_{m \in \{1,\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d) + {\rm Log}(1/\delta)}{m}
\\ & \geq \min_{m \in \{m^{\prime},\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j}
= \frac{1}{m^{\prime}} \sum_{i=T-m^{\prime}}^{T-1} \sum_{j=i+1}^{T} \Delta_{j},
\end{align*}
while if $\tilde{m}_{T} \leq m^{\prime}$, then
\begin{align*}
& \min_{m \in \{1,\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d) + {\rm Log}(1/\delta)}{m}
\\ & \geq \min_{m \in \{1,\ldots,m^{\prime}\}} \frac{d {\rm Log}(m/d)+{\rm Log}(1/\delta)}{m}
= \frac{d {\rm Log}(m^{\prime}/d) + {\rm Log}(1/\delta)}{m^{\prime}}.
\end{align*}
Either way, we have that
\begin{align}
& \min_{m \in \{1,\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d) + {\rm Log}(1/\delta)}{m} \notag
\\ & \geq \min\left\{ \frac{d {\rm Log}(m^{\prime}/d) + {\rm Log}(1/\delta)}{m^{\prime}}, \frac{1}{m^{\prime}} \sum_{i=T-m^{\prime}}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} \right\}. \label{eqn:adaptive-min-lb}
\end{align}
For any $m \in \{1,\ldots,T-1\}$,
applying Bernstein's inequality (see \cite{boucheron:13}, equation 2.10) to the random variables $\mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ]/d$, $i \in \{T-m,\ldots,T-1\}$,
and again to the random variables $-\mathbbm{1}[h^{*}_{T}(X_{i}) \neq Y_{i}]/d$, $i \in \{T-m,\ldots,T-1\}$, together with a union bound,
we obtain that, for any $\delta \in (0,1)$, with probability at least $1 - \delta / (3m^{2})$,
\begin{align}
& \frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) ) \notag
\\ & {\hskip 1cm}- \sqrt{ \left( \frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) ) \right) \frac{2\ln(3m^{2}/\delta)}{m} } \notag
\\ & < \frac{1}{m} \sum_{i=T-m}^{T-1} \mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ] \notag
\\ & < \frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) ) \notag
\\ & {\hskip 1cm}+ \max\begin{cases}
\sqrt{ \left( \frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) ) \right) \frac{4\ln(3m^{2}/\delta)}{m} }
\\\frac{(4/3)\ln(3m^{2}/\delta)}{m} \end{cases}.\label{eqn:adaptive-empirical-ub}
\end{align}
The left inequality implies that
\begin{equation*}
\frac{1}{m} \!\sum_{i=T-m}^{T-1}\!\!\! \mathcal{P}( x \!:\! h^{*}_{T}(x) \neq h^{*}_{i}(x) )
\leq \max\left\{ \frac{2}{m} \!\sum_{i=T-m}^{T-1} \!\!\!\mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ], \frac{8\ln(3m^{2}/\delta)}{m} \right\}\!.
\end{equation*}
Plugging this into the right inequality in \eqref{eqn:adaptive-empirical-ub}, we obtain that
\begin{multline*}
\frac{1}{m} \sum_{i=T-m}^{T-1} \mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ]
< \frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) )
\\ + \max\left\{ \sqrt{ \left(\frac{1}{m} \sum_{i=T-m}^{T-1} \mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ] \right) \frac{8\ln(3m^{2}/\delta)}{m} }, \frac{\sqrt{32}\ln(3m^{2}/\delta)}{m} \right\}.
\end{multline*}
By a union bound, this holds simultaneously for all $m \in \{1,\ldots,T-1\}$ with probability at least $1-\sum_{m = 1}^{T-1} \delta / (3m^{2}) > 1 - (2/3)\delta$.
Note that, on this event,
we obtain
\begin{multline*}
\frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) )
>
\frac{1}{m} \sum_{i=T-m}^{T-1} \mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ]
\\ - \max\left\{ \sqrt{ \left(\frac{1}{m} \sum_{i=T-m}^{T-1} \mathbbm{1}[ h^{*}_{T}(X_{i}) \neq Y_{i} ] \right) \frac{8\ln(3m^{2}/\delta)}{m} }, \frac{\sqrt{32}\ln(3m^{2}/\delta)}{m} \right\}.
\end{multline*}
In particular, taking $m = m_{T}^{*}$, and invoking maximality of $m_{T}^{*}$, if $m_{T}^{*} < T-1$, the right hand side is at least
\begin{equation*}
(K - 6c\sqrt{K}) \frac{d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta)}{m_{T}^{*}}.
\end{equation*}
Since $\frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} \geq \frac{1}{m} \sum_{i=T-m}^{T-1} \mathcal{P}( x : h^{*}_{T}(x) \neq h^{*}_{i}(x) )$,
taking $K = 145 c^{2}$,
we have that with probability at least $1-\delta$, if $m_{T}^{*} < T-1$, then
\begin{align*}
& 10(K+c \sqrt{2K})\min_{m \in \{1,\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d)+{\rm Log}(1/\delta)}{m}
\\ & \geq
10(K+c \sqrt{2K})\min\left\{ \frac{d {\rm Log}(m_{T}^{*}/d)+{\rm Log}(1/\delta)}{m_{T}^{*}}, \frac{1}{m_{T}^{*}} \sum_{i=T-m_{T}^{*}}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} \right\}
\\ & \geq
10(K+c \sqrt{2K})\frac{d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta)}{m_{T}^{*}}
\\ & \geq \mathcal{P}(x : \hat{h}_{T}(x) \neq h^{*}_{T}(x)).
\end{align*}
Furthermore, if $m_{T}^{*} = T-1$, then we trivially have (on the same $1-\delta$ probability event as above)
\begin{align*}
& 10(K+c \sqrt{2K})\min_{m \in \{1,\ldots,T-1\}} \frac{1}{m} \sum_{i=T-m}^{T-1} \sum_{j=i+1}^{T} \Delta_{j} + \frac{d {\rm Log}(m/d)+{\rm Log}(1/\delta)}{m}
\\ & \geq 10(K+c \sqrt{2K}) \min_{m \in \{1,\ldots,T-1\}} \frac{d {\rm Log}(m/d)+{\rm Log}(1/\delta)}{m}
\\ & = 10(K+c \sqrt{2K}) \frac{d {\rm Log}((T-1)/d)+{\rm Log}(1/\delta)}{T-1}
\\ & = 10(K+c \sqrt{2K})\frac{d {\rm Log}(m_{T}^{*}/d) + {\rm Log}(1/\delta)}{m_{T}^{*}}
\geq \mathcal{P}(x : \hat{h}_{T}(x) \neq h^{*}_{T}(x)).
\end{align*}
\qed
\end{proof}
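For intuition, the window-selection rule analyzed above can be sketched in a few lines of Python. This is a minimal illustration for a finite hypothesis class given as callables; the names \texttt{select\_window} and \texttt{hypotheses}, and the helper \texttt{Log} (assumed convention $\max\{\ln x, 1\}$), are ours and not part of the formal statement.
\begin{verbatim}
import math

def Log(x):
    return max(math.log(x), 1.0)  # assumed Log convention

def select_window(hypotheses, X, Y, d, K, delta):
    # m_hat: largest m such that every m' <= m admits a hypothesis
    # with fewer than K(d Log(m'/d) + Log(1/delta)) mistakes on the
    # last m' examples; h_hat: the ERM on that largest window.
    T, m_hat, h_hat = len(X), 1, hypotheses[0]
    for m in range(1, T + 1):
        budget = K * (d * Log(m / d) + Log(1.0 / delta))
        errs = [(sum(h(x) != y for x, y in zip(X[-m:], Y[-m:])), i)
                for i, h in enumerate(hypotheses)]
        best_err, best_i = min(errs)
        if best_err >= budget:
            break  # maximality: stop at the first infeasible window
        m_hat, h_hat = m, hypotheses[best_i]
    return h_hat, m_hat
\end{verbatim}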
\subsection{Conditions Guaranteeing a Sublinear Number of Mistakes}
\label{sec:sublinear}
\input{tex-files/sublinear.tex}
\section{Polynomial-Time Algorithms for Linear Separators}
\label{sec:halfspaces}
In this section, we suppose $\Delta_{t} = \Delta$ for every $t \in \mathbb{N}$, for a fixed constant $\Delta > 0$,
and we consider the special case of learning homogeneous linear separators in $\mathbb{R}^{k}$ under a uniform distribution
on the origin-centered unit sphere.
In this case, the analysis of \cite{helmbold:94} mentioned in Section~\ref{sec:classic-consistency} implies that it is possible to achieve a
bound on the error rate that is $\tilde{O}(d \sqrt{\Delta})$,
using an algorithm that runs in time ${\rm poly}(d,1/\Delta,\log(1/\delta))$ (and independent of $t$) for each prediction.
This also implies that it is possible to achieve an expected number of mistakes among $T$ predictions that is $\tilde{O}(d \sqrt{\Delta}) \times T$.
\cite{min_concept}\footnote{This
work in fact studies a much broader model of drift, which allows the distribution $\mathcal{P}$ to vary with time as well. However, this $\tilde{O}((d \Delta)^{1/4}) \times T$ result can be obtained
from their more-general theorem by calculating the various parameters for this particular setting.}
have since proven that a variant of the Perceptron algorithm is capable of achieving an expected number of mistakes $\tilde{O}( (d \Delta)^{1/4} ) \times T$.
Below, we improve on this result by showing that there exists an efficient algorithm that achieves a
bound on the error rate that is $\tilde{O}(\sqrt{d \Delta})$,
as was possible with the inefficient algorithm of \cite{helmbold:94,long:99} mentioned in Section~\ref{sec:classic-constant-drift}.
This leads to a bound on the expected number of mistakes that is $\tilde{O}(\sqrt{d \Delta}) \times T$.
Furthermore, our approach also allows us to present the method as an \emph{active learning}
algorithm, and to bound the expected number of queries, as a function of the
number of samples $T$, by $\tilde{O}(\sqrt{d \Delta}) \times T$.
The technique is based on a modification of the algorithm of \cite{helmbold:94},
replacing an empirical risk minimization step with (a modification of) the computationally-efficient algorithm of \cite{awasthi:13}.
Formally, define the class of homogeneous linear separators as the set of classifiers
$h_{w} : \mathbb{R}^{d} \to \{-1,+1\}$, for $w \in \mathbb{R}^{d}$ with $\|w\|=1$,
such that $h_{w}(x) = {\rm sign}( w \cdot x )$ for every $x \in \mathbb{R}^{d}$.
\subsection{An Improved Guarantee for a Polynomial-Time Algorithm}
\label{sec:efficient-linsep}
We have the following result.
\begin{theorem}
\label{thm:linsep-uniform}
When $\mathbb C$ is the space of homogeneous linear separators (with $d \geq 4$)
and $\mathcal{P}$ is the uniform distribution on the surface of
the origin-centered unit sphere in $\mathbb{R}^{d}$,
for any fixed $\Delta > 0$,
for any $\delta \in (0,1/e)$,
there is an algorithm that runs in time ${\rm poly}(d,1/\Delta,\log(1/\delta))$ for each time $t$,
such that for any $h^{*}_{\mathrm{seq}} \in S_{\Delta}$,
for every sufficiently large $t \in \mathbb{N}$, with probability at least $1-\delta$,
\begin{equation*}
{\rm er}_{t}(\hat{h}_{t}) = O\left( \sqrt{\Delta d \log\left(\frac{1}{\delta}\right) } \right).
\end{equation*}
Also, running this algorithm with $\delta = \sqrt{\Delta d} \land 1/e$,
the expected number of mistakes among the first $T$ instances is
$O\left( \sqrt{ \Delta d \log\left(\frac{1}{\Delta d}\right) } T \right)$.
Furthermore, the algorithm can be run as an \emph{active learning} algorithm,
in which case, for this choice of $\delta$, the expected number of labels
requested by the algorithm among the first $T$ instances is
$O\left( \sqrt{\Delta d} \log^{3/2}\left(\frac{1}{\Delta d}\right) T \right)$.
\end{theorem}
We first state the algorithm used to obtain this result. It is primarily based on a
margin-based learning strategy of \cite{awasthi:13}, combined with an initialization
step based on a modified Perceptron rule from \cite{stream_perceptron,min_concept}.
For $\tau > 0$ and $x \in \mathbb{R}$, define $\ell_{\tau}(x) = \max\left\{0, 1 - \frac{x}{\tau}\right\}$.
Consider the following algorithm and subroutine;
parameters $\delta_{k}$, $m_{k}$, $\tau_{k}$, $r_{k}$, $b_{k}$, $\alpha$, and $\kappa$
will all be specified in the context of the proof; we suppose $M = \sum_{k=0}^{\lceil \log_{2}(1/\alpha) \rceil} m_{k}$.
\begin{bigboxit}
Algorithm: DriftingHalfspaces\\
0. Let $\tilde{h}_{0}$ be an arbitrary classifier in $\mathbb C$\\
1. For $i = 1,2,\ldots$\\
2. \quad $\tilde{h}_{i} \gets {\rm ABL}(M (i-1), \tilde{h}_{i-1})$\\
\end{bigboxit}
\begin{bigboxit}
Subroutine: ${\rm ModPerceptron}(t,\tilde{h})$\\
0. Let $w_{t}$ be any element of $\mathbb{R}^{d}$ with $\|w_{t}\| = 1$\\
1. For $m = t+1,t+2,\ldots,t+m_{0}$\\
2. \quad Choose $\hat{h}_{m} = \tilde{h}$ (i.e., predict $\hat{Y}_{m} = \tilde{h}(X_{m})$ as the prediction for $Y_{m}$)\\
3. \quad Request the label $Y_{m}$\\
4. \quad If $h_{w_{m-1}}(X_{m}) \neq Y_{m}$\\
5. \qquad $w_{m} \gets w_{m-1} - 2(w_{m-1} \cdot X_{m}) X_{m}$\\
6. \quad Else $w_{m} \gets w_{m-1}$\\
7. Return $w_{t+m_{0}}$
\end{bigboxit}
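The update in Step 5 reflects $w_{m-1}$ across the hyperplane orthogonal to $X_{m}$; since $\|X_{m}\|=1$ almost surely under the uniform distribution on the sphere, the iterates remain unit vectors. A minimal NumPy sketch of this rule (the function name, and the assumption that labels are $\pm 1$ scalars, are ours):
\begin{verbatim}
import numpy as np

def mod_perceptron(X, Y, w0):
    # X: array of shape (m0, d) with unit-norm rows; Y: +/-1 labels.
    w = w0 / np.linalg.norm(w0)
    for x, y in zip(X, Y):
        if np.sign(w @ x) != y:          # mistake by h_{w_{m-1}}
            w = w - 2.0 * (w @ x) * x    # reflection; ||w|| stays 1
    return w
\end{verbatim}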
\begin{bigboxit}
Subroutine: ${\rm ABL}(t,\tilde{h})$\\
0. Let $w_{0}$ be the return value of ${\rm ModPerceptron}(t,\tilde{h})$\\
1. For $k = 1,2,\ldots,\lceil \log_{2}(1/\alpha) \rceil$\\
2. \quad $W_{k} \gets \{\}$\\
3. \quad For $s = t + \sum_{j=0}^{k-1} m_{j} + 1, \ldots, t + \sum_{j=0}^{k} m_{j}$\\
4. \qquad Choose $\hat{h}_{s} = \tilde{h}$ (i.e., predict $\hat{Y}_{s} = \tilde{h}(X_{s})$ as the prediction for $Y_{s}$)\\
5. \qquad If $|w_{k-1} \cdot X_{s}| \leq b_{k-1}$, Request label $Y_{s}$ and let $W_{k} \gets W_{k} \cup \{(X_{s},Y_{s})\}$\\
6. \quad Find $v_{k} \in \mathbb{R}^{d}$ with $\|v_{k} - w_{k-1}\| \leq r_{k}$, $0 < \|v_{k}\| \leq 1$,
and\\ {\hskip 7mm}$\sum\limits_{(x,y) \in W_{k}} \ell_{\tau_{k}}(y (v_{k} \cdot x)) \leq \inf\limits_{v : \|v-w_{k-1}\| \leq r_{k}} \sum\limits_{(x,y) \in W_{k}} \ell_{\tau_{k}}(y (v \cdot x)) + \kappa |W_{k}|$\\
7. \quad Let $w_{k} = \frac{1}{\|v_{k}\|} v_{k}$\\
8. Return $h_{w_{\lceil \log_{2}(1/\alpha) \rceil-1}}$
\end{bigboxit}
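As a rough illustration of Steps 5--7 of ${\rm ABL}$, the following sketch substitutes a generic constrained solver for the (approximate) minimization in Step 6; the $\kappa$-suboptimality tolerance is left implicit, and the helper names are ours.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def abl_phase(W, w_prev, tau, r):
    # W: list of (x, y) pairs collected in the margin band (Step 5).
    Xs = np.array([x for x, _ in W])
    Ys = np.array([y for _, y in W])
    loss = lambda v: np.maximum(0.0, 1.0 - Ys * (Xs @ v) / tau).sum()
    cons = ({'type': 'ineq',
             'fun': lambda v: r - np.linalg.norm(v - w_prev)},
            {'type': 'ineq',
             'fun': lambda v: 1.0 - np.linalg.norm(v)})
    v = minimize(loss, w_prev, constraints=cons).x   # Step 6
    return v / np.linalg.norm(v)                     # Step 7
\end{verbatim}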
Before stating the proof, we have a few additional lemmas that will be needed.
The following result for ${\rm ModPerceptron}$ was proven by \cite{min_concept}.
\begin{lemma}
\label{lem:perceptron}
Suppose $\Delta < \frac{1}{512}$.
Consider the values $w_{m}$ obtained during the execution of ${\rm ModPerceptron}(t,\tilde{h})$.
$\forall m \in \{t+1,\ldots, t+ m_{0}\}$, $\mathcal{P}(x : h_{w_{m}}(x) \neq h_{m}^{*}(x)) \leq \mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x))$.
Furthermore, letting $c_{1} = \frac{\pi^{2}}{d \cdot 400 \cdot 2^{15}}$, if
$\mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) \geq 1/32$,
then with probability at least $1/64$,
$\mathcal{P}(x : h_{w_{m}}(x) \neq h_{m}^{*}(x)) \leq (1 - c_{1}) \mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x))$.
\end{lemma}
This implies the following.
\begin{lemma}
\label{lem:perceptron-init}
Suppose $\Delta \leq \frac{\pi^{2}}{400 \cdot 2^{27} (d+\ln(4/\delta))}$.
For $m_{0} = \max\{\lceil 128 (1/c_{1}) \ln(32) \rceil,$ $\lceil 512 \ln(\frac{4}{\delta}) \rceil \}$,
with probability at least $1-\delta/4$,
${\rm ModPerceptron}(t,\tilde{h})$ returns a vector $w$ with
$\mathcal{P}(x : h_{w}(x) \neq h_{t+m_{0}+1}^{*}(x)) \leq 1/16$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:perceptron} and a union bound, in general we have
\begin{equation}
\label{eqn:perceptron-weak-update}
\mathcal{P}(x : h_{w_{m}}(x) \neq h_{m+1}^{*}(x)) \leq \mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) + \Delta.
\end{equation}
Furthermore, if $\mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) \geq 1/32$,
then with probability at least $1/64$,
\begin{equation}
\label{eqn:perceptron-strong-update}
\mathcal{P}(x : h_{w_{m}}(x) \neq h_{m+1}^{*}(x)) \leq (1-c_{1}) \mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) + \Delta.
\end{equation}
In particular, this implies that the number $N$ of values $m \in \{t+1,\ldots,t+m_{0}\}$ with either
$\mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) < 1/32$ or $\mathcal{P}(x : h_{w_{m}}(x) \neq h_{m+1}^{*}(x)) \leq (1-c_{1}) \mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) + \Delta$
is lower-bounded by a ${\rm Binomial}(m_{0},1/64)$ random variable.
Thus, a Chernoff bound implies that with probability at least $1 - \exp\{ - m_{0} / 512 \} \geq 1 - \delta/4$,
we have $N \geq m_{0} / 128$. Suppose this happens.
Since $\Delta m_{0} \leq 1/32$, if any $m \in \{t+1,\ldots,t+m_{0}\}$ has $\mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) < 1/32$,
then inductively applying \eqref{eqn:perceptron-weak-update} implies that
$\mathcal{P}(x : h_{w_{t+m_{0}}}(x) \neq h_{t+m_{0}+1}^{*}(x)) \leq 1/32 + \Delta m_{0} \leq 1/16$.
On the other hand, if all $m \in \{t+1,\ldots,t+m_{0}\}$ have $\mathcal{P}(x : h_{w_{m-1}}(x) \neq h_{m}^{*}(x)) \geq 1/32$,
then in particular we have $N$ values of $m \in \{t+1,\ldots,t+m_{0}\}$ satisfying \eqref{eqn:perceptron-strong-update}.
Combining this fact with \eqref{eqn:perceptron-weak-update} inductively, we have that
\begin{multline*}
\mathcal{P}(x : h_{w_{t+m_{0}}}(x) \neq h_{t+m_{0}+1}^{*}(x))
\leq (1-c_{1})^{N} \mathcal{P}(x : h_{w_{t}}(x) \neq h_{t+1}^{*}(x)) + \Delta m_{0}
\\ \leq (1-c_{1})^{(1/c_{1}) \ln(32) } \mathcal{P}(x : h_{w_{t}}(x) \neq h_{t+1}^{*}(x)) + \Delta m_{0}
\leq \frac{1}{32} + \Delta m_{0}
\leq \frac{1}{16}.
\end{multline*}
\qed
\end{proof}
Next, we consider the execution of ${\rm ABL}(t,\tilde{h})$, and let the sets $W_{k}$ be as in that execution.
We will denote by $w^{*}$ the weight vector with $\|w^{*}\|=1$ such that $h_{t+m_{0}+1}^{*} = h_{w^{*}}$.
Also let $M_{1} = M-m_{0}$.
The proof relies on a few results proven in the work of \cite{awasthi:13}, which we summarize in the following lemmas.
Although the results were proven in a slightly different setting in that work (namely, agnostic learning under a fixed joint distribution),
one can easily verify that their proofs remain valid in our present context as well.
\begin{lemma}
\label{lem:denoised-risk}
\cite{awasthi:13}
Fix any $k \in \{1,\ldots,\lceil \log_{2}(1/\alpha) \rceil\}$.
For a universal constant $c_{7} > 0$, suppose $b_{k-1} = c_{7} 2^{1-k} / \sqrt{d}$,
and let $z_{k} = \sqrt{r_{k}^{2}/(d-1) + b_{k-1}^{2}}$.
For a universal constant $c_{1} > 0$, if $\|w^{*} - w_{k-1}\| \leq r_{k}$,
\begin{multline*}
{\hskip -3mm}\left| \mathbb E\!\left[ \sum_{(x,y) \in W_{k}} \ell_{\tau_{k}}(|w^{*} \cdot x|) \Big| w_{k-1}, |W_{k}| \right]
- \mathbb E\!\left[ \sum_{(x,y) \in W_{k}} \ell_{\tau_{k}}(y (w^{*} \cdot x)) \Big| w_{k-1}, |W_{k}| \right] \right|
\\ \leq c_{1} |W_{k}| \sqrt{2^{k} \Delta M_{1}} \frac{z_{k}}{\tau_{k}}.
\end{multline*}
\end{lemma}
\begin{lemma}
\label{lem:margin-error-concentration}
\cite{balcan:13}
For any $c > 0$, there is a constant $c^{\prime} > 0$ depending only on $c$ (i.e., not depending on $d$)
such that, for any $u,v \in \mathbb{R}^{d}$ with $\|u\|=\|v\|=1$, letting $\sigma = \mathcal{P}(x : h_{u}(x) \neq h_{v}(x))$,
if $\sigma < 1/2$, then
\begin{equation*}
\mathcal{P}\left( x : h_{u}(x) \neq h_{v}(x) \text{ and } |v \cdot x| \geq c^{\prime} \frac{\sigma}{\sqrt{d}} \right) \leq c \sigma.
\end{equation*}
\end{lemma}
The following is a well-known lemma concerning concentration around the equator for the uniform distribution (see e.g., \cite{stream_perceptron,balcan:07,awasthi:13});
for instance, it easily follows from the formulas for the area of a spherical cap derived by \cite{li:11}.
\begin{lemma}
\label{lem:uniform-P-concentration}
For any constant $C > 0$, there are constants $c_{2},c_{3} > 0$ depending only on $C$ (i.e., independent of $d$) such that,
for any $w \in \mathbb{R}^{d}$ with $\|w\|=1$, $\forall \gamma \in [0, C/\sqrt{d}]$,
\begin{equation*}
c_{2} \gamma \sqrt{d} \leq \mathcal{P}\left( x : |w \cdot x| \leq \gamma \right) \leq c_{3} \gamma \sqrt{d}.
\end{equation*}
\end{lemma}
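As a quick numerical sanity check of this two-sided band estimate, one can run a Monte Carlo sketch such as the following (the sampling helper is ours, not part of the proof):
\begin{verbatim}
import numpy as np

def band_mass(d, gamma, n=200000, seed=0):
    # Estimate P(|w . x| <= gamma) for x uniform on the unit
    # sphere in R^d; by symmetry, take w = e_1.
    x = np.random.default_rng(seed).standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    return np.mean(np.abs(x[:, 0]) <= gamma)

# e.g., band_mass(100, 0.1) is roughly 0.68, i.e., of order
# gamma * sqrt(d) = 1, as the lemma requires.
\end{verbatim}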
Based on this lemma, \cite{awasthi:13} prove the following.
\begin{lemma}
\label{lem:opt-margin-loss}
\cite{awasthi:13}
For $X \sim \mathcal{P}$, for any $w \in \mathbb{R}^{d}$ with $\|w\|=1$, for any $C > 0$ and $\tau, b \in [0,C/\sqrt{d}]$,
for $c_{2},c_{3}$ as in Lemma~\ref{lem:uniform-P-concentration},
\begin{equation*}
\mathbb E\left[ \ell_{\tau}( |w^{*} \cdot X| ) \Big| |w \cdot X| \leq b \right] \leq \frac{c_{3} \tau}{c_{2} b}.
\end{equation*}
\end{lemma}
The following is a slightly stronger version of a result of \cite{awasthi:13} (specifically,
the size of $m_{k}$, and consequently the bound on $|W_{k}|$, are both improved by a factor of $d$
compared to the original result).
\begin{lemma}
\label{lem:margin-error-bound}
Fix any $\delta \in (0,1/e)$.
For universal constants $c_{4},c_{5},c_{6},c_{7},c_{8},c_{9},c_{10} \in (0,\infty)$,
for an appropriate choice of $\kappa \in (0,1)$ (a universal constant),
if $\alpha = c_{9} \sqrt{\Delta d \log\left(\frac{1}{\kappa\delta}\right)}$,
for every $k \in \{1,\ldots,\lceil \log_{2}(1/\alpha) \rceil\}$,
if $b_{k-1} = c_{7} 2^{1-k} / \sqrt{d}$, $\tau_{k} = c_{8} 2^{-k} / \sqrt{d}$, $r_{k} = c_{10} 2^{-k}$, $\delta_{k} = \delta / (\lceil \log_{2}(4/\alpha) \rceil - k)^{2}$,
and $m_{k} = \left\lceil c_{5} \frac{2^{k}}{\kappa^{2}} d \log\left(\frac{1}{\kappa\delta_{k}} \right)\right\rceil$,
and if $\mathcal{P}(x : h_{w_{k-1}}(x) \neq h_{w^{*}}(x)) \leq 2^{-k-3}$,
then with probability at least $1-(4/3)\delta_{k}$,
$|W_{k}| \leq c_{6} \frac{1}{\kappa^{2}} d \log\left(\frac{1}{\kappa\delta_{k}}\right)$
and
$\mathcal{P}(x : h_{w_{k}}(x) \neq h_{w^{*}}(x)) \leq 2^{-k-4}$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:uniform-P-concentration}, and a Chernoff and union bound,
for an appropriately large choice of $c_{5}$ and any $c_{7} > 0$,
letting $c_{2},c_{3}$ be as in Lemma~\ref{lem:uniform-P-concentration} (with $C=c_{7} \lor (c_{8}/2)$),
with probability at least $1-\delta_{k}/3$,
\begin{equation}
\label{eqn:Wk-bounds}
c_{2} c_{7} 2^{-k} m_{k}
\leq |W_{k}| \leq
4 c_{3} c_{7} 2^{-k} m_{k}.
\end{equation}
The claimed upper bound on $|W_{k}|$ follows from this second inequality.
Next note that, if $\mathcal{P}(x : h_{w_{k-1}}(x) \neq h_{w^{*}}(x)) \leq 2^{-k-3}$,
then
\begin{equation*}
\max\{ \ell_{\tau_{k}}(y (w^{*} \cdot x)) : x \in \mathbb{R}^{d}, |w_{k-1} \cdot x| \leq b_{k-1}, y \in \{-1,+1\} \} \leq c_{11} \sqrt{d}
\end{equation*}
for some universal constant $c_{11} > 0$.
Furthermore, since $\mathcal{P}(x : h_{w_{k-1}}(x) \neq h_{w^{*}}(x)) \leq 2^{-k-3}$,
we know that the angle between $w_{k-1}$ and $w^{*}$ is at most $2^{-k-3} \pi$,
so that
\begin{multline*}
\|w_{k-1} - w^{*}\|
= \sqrt{ 2 - 2 w_{k-1} \cdot w^{*} }
\leq \sqrt{ 2 - 2 \cos(2^{-k-3} \pi) }
\\ \leq \sqrt{ 2 - 2 \cos^{2}(2^{-k-3} \pi) }
= \sqrt{2} \sin(2^{-k-3} \pi) \leq 2^{-k-3} \pi \sqrt{2}.
\end{multline*}
For $c_{10} = \pi\sqrt{2} 2^{-3}$, this is $r_{k}$.
By Hoeffding's inequality (under the conditional distribution given $|W_{k}|$), the law of total probability,
Lemma~\ref{lem:denoised-risk}, and linearity of conditional expectations,
with probability at least $1-\delta_{k}/3$, for $X \sim \mathcal{P}$,
\begin{multline}
\label{eqn:opt-loss-bound}
\sum_{(x,y) \in W_{k}} \ell_{\tau_{k}}( y ( w^{*} \cdot x) )
\leq |W_{k}| \mathbb E\left[ \ell_{\tau_{k}}(|w^{*} \cdot X|) \Big| w_{k-1}, |w_{k-1} \cdot X| \leq b_{k-1} \right]
\\ + c_{1} |W_{k}| \sqrt{2^{k} \Delta M_{1}} \frac{z_{k}}{\tau_{k}}
+ \sqrt{ |W_{k}| (1/2) c_{11}^{2} d \ln(3/\delta_{k}) }.
\end{multline}
We bound each term on the right hand side separately.
By Lemma~\ref{lem:opt-margin-loss}, the first term is at most $|W_{k}|\frac{c_{3} \tau_{k}}{c_{2} b_{k-1}} = |W_{k}|\frac{c_{3} c_{8}}{2 c_{2} c_{7}}$.
Next,
\begin{equation*}
\frac{z_{k}}{\tau_{k}}
= \frac{\sqrt{c_{10}^{2} 2^{-2k}/(d-1) + 4 c_{7}^{2} 2^{-2k}/d}}{c_{8} 2^{-k} / \sqrt{d}}
\leq \frac{\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}}{c_{8}},
\end{equation*}
while $2^{k} \leq 2/\alpha$
so that the second term is at most
\begin{equation*}
\sqrt{2} c_{1} \frac{\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}}{c_{8}} |W_{k}| \sqrt{ \frac{\Delta M_{1}}{\alpha} }.
\end{equation*}
Noting that
\begin{equation}
\label{eqn:m-bound}
M_{1} = \sum_{k^{\prime}=1}^{\lceil \log_{2}(1/\alpha) \rceil} m_{k^{\prime}}
\leq \frac{32 c_{5}}{\kappa^{2}} \frac{1}{\alpha} d \log\left(\frac{1}{\kappa\delta}\right),
\end{equation}
we find that the second term on the right hand side of \eqref{eqn:opt-loss-bound} is at most
\begin{equation*}
\sqrt{\frac{c_{5}}{c_{9}}} \frac{8 c_{1}}{\kappa} \frac{\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}}{c_{8}} |W_{k}| \sqrt{ \frac{\Delta d \log\left(\frac{1}{\kappa\delta}\right)}{\alpha^{2}} }
= \frac{8 c_{1} \sqrt{c_{5}}}{\kappa} \frac{\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}}{c_{8}c_{9}} |W_{k}|.
\end{equation*}
Finally, since $d \ln(3/\delta_{k}) \leq 2 d \ln(1/\delta_{k}) \leq \frac{2 \kappa^{2}}{c_{5}} 2^{-k} m_{k}$,
and \eqref{eqn:Wk-bounds} implies $2^{-k} m_{k} \leq \frac{1}{c_{2} c_{7}} |W_{k}|$,
the third term on the right hand side of \eqref{eqn:opt-loss-bound} is at most
\begin{equation*}
|W_{k}| \frac{c_{11} \kappa}{ \sqrt{c_{2} c_{5} c_{7}} }.
\end{equation*}
Altogether, we have
\begin{equation*}
\sum_{(x,y) \in W_{k}} \ell_{\tau_{k}}( y ( w^{*} \cdot x) )
\leq |W_{k}| \left(
\frac{c_{3} c_{8}}{2 c_{2} c_{7}}
+ \frac{8 c_{1} \sqrt{c_{5}}}{\kappa} \frac{\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}}{c_{8}c_{9}}
+ \frac{c_{11} \kappa}{ \sqrt{c_{2} c_{5} c_{7}} }\right).
\end{equation*}
Taking $c_{9} = 1/\kappa^{3}$ and $c_{8} = \kappa$, this is at most
\begin{equation*}
\kappa |W_{k}| \left(
\frac{c_{3}}{2 c_{2} c_{7}}
+ 8 c_{1} \sqrt{c_{5}}\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}
+ \frac{c_{11}}{ \sqrt{c_{2} c_{5} c_{7}} }\right).
\end{equation*}
Next, note that because $h_{w_{k}}(x) \neq y \Rightarrow \ell_{\tau_{k}}(y (v_{k} \cdot x)) \geq 1$,
and because (as proven above) $\|w^{*} - w_{k-1}\| \leq r_{k}$,
\begin{equation*}
|W_{k}| {\rm er}_{W_{k}}( h_{w_{k}} )
\leq \sum_{(x,y) \in W_{k}} \ell_{\tau_{k}}(y (v_{k} \cdot x))
\leq \sum_{(x,y) \in W_{k}} \ell_{\tau_{k}}(y (w^{*} \cdot x)) + \kappa |W_{k}|.
\end{equation*}
Combined with the above, we have
\begin{equation*}
|W_{k}| {\rm er}_{W_{k}}( h_{w_{k}} )
\leq \kappa |W_{k}| \left(
1 + \frac{c_{3}}{2 c_{2} c_{7}}
+ 8 c_{1} \sqrt{c_{5}}\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}}
+ \frac{c_{11}}{ \sqrt{c_{2} c_{5} c_{7}} }\right).
\end{equation*}
Let $c_{12} = 1 + \frac{c_{3}}{2 c_{2} c_{7}} + 8 c_{1} \sqrt{c_{5}}\sqrt{ 2c_{10}^{2} + 4 c_{7}^{2}} + \frac{c_{11}}{ \sqrt{c_{2} c_{5} c_{7}} }$.
Furthermore,
\begin{multline*}
|W_{k}|{\rm er}_{W_{k}}( h_{w_{k}} )
= \sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w_{k}}(x) \neq y ]
\\ \geq \sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w_{k}}(x) \neq h_{w^{*}}(x) ] - \sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w^{*}}(x) \neq y ].
\end{multline*}
For an appropriately large value of $c_{5}$,
by a Chernoff bound, with probability at least $1-\delta_{k}/3$,
\begin{equation*}
\sum_{s=t+\sum_{j=0}^{k-1}m_{j} + 1}^{t+\sum_{j=0}^{k} m_{j}} \mathbbm{1}[ h_{w^{*}}(X_{s}) \neq Y_{s} ]
\leq 2 e \Delta M_{1} m_{k} + \log_{2}(3/\delta_{k}).
\end{equation*}
In particular, this implies
\begin{equation*}
\sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w^{*}}(x) \neq y ]
\leq 2 e \Delta M_{1} m_{k} + \log_{2}(3/\delta_{k}),
\end{equation*}
so that
\begin{equation*}
\sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w_{k}}(x) \neq h_{w^{*}}(x) ]
\leq |W_{k}|{\rm er}_{W_{k}}( h_{w_{k}} ) + 2 e \Delta M_{1} m_{k} + \log_{2}(3/\delta_{k}).
\end{equation*}
Noting that \eqref{eqn:m-bound} and \eqref{eqn:Wk-bounds} imply
\begin{align*}
\Delta M_{1} m_{k} & \leq \Delta \frac{32 c_{5}}{\kappa^{2}} \frac{ d \log\left(\frac{1}{\kappa\delta}\right) }{c_{9} \sqrt{ \Delta d \log\left(\frac{1}{\kappa\delta}\right)}} \frac{2^{k}}{c_{2} c_{7}} |W_{k}|
\leq \frac{32 c_{5}}{c_{2} c_{7} c_{9} \kappa^{2}} \sqrt{ \Delta d \log\left(\frac{1}{\kappa\delta}\right) } 2^{k} |W_{k}|
\\ & = \frac{32 c_{5}}{c_{2} c_{7} c_{9}^{2} \kappa^{2}} \alpha 2^{k} |W_{k}|
= \frac{32 c_{5} \kappa^{4}}{c_{2} c_{7}} \alpha 2^{k} |W_{k}|
\leq \frac{32 c_{5} \kappa^{4}}{c_{2} c_{7}} |W_{k}|,
\end{align*}
and \eqref{eqn:Wk-bounds} implies $\log_{2}(3/\delta_{k}) \leq \frac{2\kappa^{2}}{c_{2}c_{5}c_{7}}|W_{k}|$,
altogether we have
\begin{align*}
\sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w_{k}}(x) \neq h_{w^{*}}(x) ]
& \leq |W_{k}|{\rm er}_{W_{k}}( h_{w_{k}} ) + \frac{64 e c_{5} \kappa^{4}}{c_{2} c_{7}} |W_{k}| + \frac{2\kappa^{2}}{c_{2}c_{5}c_{7}}|W_{k}|
\\ & \leq \kappa |W_{k}| \left( c_{12} + \frac{64 e c_{5} \kappa^{3}}{c_{2} c_{7}} + \frac{2\kappa}{c_{2}c_{5}c_{7}} \right).
\end{align*}
Letting $c_{13} = c_{12} + \frac{64 e c_{5}}{c_{2} c_{7}} + \frac{2}{c_{2}c_{5}c_{7}}$, and noting $\kappa \leq 1$,
we have
$\sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w_{k}}(x) \neq h_{w^{*}}(x) ] \leq c_{13} \kappa |W_{k}|$.
Lemma~\ref{lem:vc-ratio} (applied under the conditional distribution given $|W_{k}|$)
and the law of total probability imply that with probability at least $1-\delta_{k}/3$,
\begin{align*}
|W_{k}| &\mathcal{P}\left( x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \Big| |w_{k-1} \cdot x| \leq b_{k-1}\right)
\\ & \leq \sum_{(x,y) \in W_{k}} \mathbbm{1}[ h_{w_{k}}(x) \neq h_{w^{*}}(x)]
+ c_{14} \sqrt{ |W_{k}| (d \log(|W_{k}|/d) + \log(1/\delta_{k})) },
\end{align*}
for a universal constant $c_{14} > 0$.
Combined with the above, and the fact that \eqref{eqn:Wk-bounds} implies
$\log(1/\delta_{k}) \leq \frac{\kappa^{2}}{c_{2}c_{5}c_{7}}|W_{k}|$
and
\begin{align*}
d \log(|W_{k}|/d) & \leq d \log\left(\frac{8c_{3}c_{5}c_{7} \log\left(\frac{1}{\kappa\delta_{k}}\right)}{\kappa^{2}}\right)
\\ & \leq d \log\left(\frac{8 c_{3} c_{5} c_{7}}{\kappa^{3} \delta_{k}}\right)
\leq 3\log(8 \max\{c_{3},1\} c_{5} ) c_{5} d \log\left(\frac{1}{\kappa \delta_{k}}\right)
\\ & \leq 3 \log(8 \max\{c_{3},1\}) \kappa^{2} 2^{-k} m_{k}
\leq \frac{3 \log(8 \max\{c_{3},1\})}{c_{2} c_{7}} \kappa^{2} |W_{k}|,
\end{align*}
we have
\begin{align*}
|W_{k}| & \mathcal{P}\left( x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \Big| |w_{k-1} \cdot x| \leq b_{k-1}\right)
\\ & \leq c_{13} \kappa |W_{k}|
+ c_{14} \sqrt{ |W_{k}| \left( \frac{3 \log(8 \max\{c_{3},1\})}{c_{2} c_{7}} \kappa^{2} |W_{k}| + \frac{\kappa^{2}}{c_{2}c_{5}c_{7}}|W_{k}| \right)}
\\ & = \kappa |W_{k}| \left( c_{13} + c_{14} \sqrt{ \frac{3 \log(8 \max\{c_{3},1\})}{c_{2} c_{7}} + \frac{1}{c_{2}c_{5}c_{7}}}\right).
\end{align*}
Thus, letting $c_{15} = \left( c_{13} + c_{14} \sqrt{ \frac{3 \log(8 \max\{c_{3},1\})}{c_{2} c_{7}} + \frac{1}{c_{2}c_{5}c_{7}}}\right)$,
we have
\begin{equation}
\label{eqn:conditional-error-bound}
\mathcal{P}\left( x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \Big| |w_{k-1} \cdot x| \leq b_{k-1}\right)
\leq c_{15} \kappa.
\end{equation}
Next, note that $\|v_{k} - w_{k-1}\|^{2} = \|v_{k}\|^{2} + 1 - 2 \|v_{k}\| \cos( \pi \mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x)) )$.
Thus, one implication of the fact that $\|v_{k} - w_{k-1}\| \leq r_{k}$ is that
$\frac{\|v_{k}\|}{2} + \frac{1-r_{k}^{2}}{2\|v_{k}\|} \leq \cos( \pi \mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x)) )$;
since the left hand side is positive, we have $\mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x)) < 1/2$.
Additionally, by differentiating, one can easily verify that for $\phi \in [0,\pi]$,
$x \mapsto \sqrt{ x^{2} + 1 - 2 x \cos(\phi) }$ is minimized at $x=\cos(\phi)$,
in which case $\sqrt{x^{2} + 1 - 2 x \cos(\phi) } = \sin(\phi)$.
Thus, $\|v_{k} - w_{k-1}\| \geq \sin( \pi \mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x) ) )$.
Since $\|v_{k} - w_{k-1}\| \leq r_{k}$,
we have $\sin(\pi \mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x))) \leq r_{k}$.
Since $\sin(\pi x) \geq x$ for all $x \in [0,1/2]$,
combining this with the fact (proven above) that $\mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x)) < 1/2$
implies $\mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x)) \leq r_{k}$.
In particular, we have that both $\mathcal{P}(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x)) \leq r_{k}$ and $\mathcal{P}(x : h_{w^{*}}(x) \neq h_{w_{k-1}}(x)) \leq 2^{-k-3} \leq r_{k}$.
Now Lemma~\ref{lem:margin-error-concentration} implies that, for any universal constant $c > 0$,
there exists a corresponding universal constant $c^{\prime} > 0$ such that
\begin{equation*}
\mathcal{P}\left(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x) \text{ and } |w_{k-1} \cdot x| \geq c^{\prime} \frac{r_{k}}{\sqrt{d}} \right) \leq c r_{k}
\end{equation*}
and
\begin{equation*}
\mathcal{P}\left(x : h_{w^{*}}(x) \neq h_{w_{k-1}}(x) \text{ and } |w_{k-1} \cdot x| \geq c^{\prime} \frac{r_{k}}{\sqrt{d}} \right) \leq c r_{k},
\end{equation*}
so that (by a union bound)
\begin{align*}
& \mathcal{P}\left(x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \text{ and } |w_{k-1} \cdot x| \geq c^{\prime} \frac{r_{k}}{\sqrt{d}} \right)
\\ & \leq
\mathcal{P}\left(x : h_{w_{k}}(x) \neq h_{w_{k-1}}(x) \text{ and } |w_{k-1} \cdot x| \geq c^{\prime} \frac{r_{k}}{\sqrt{d}} \right)
\\ & +
\mathcal{P}\left(x : h_{w^{*}}(x) \neq h_{w_{k-1}}(x) \text{ and } |w_{k-1} \cdot x| \geq c^{\prime} \frac{r_{k}}{\sqrt{d}} \right)
\leq 2 c r_{k}.
\end{align*}
In particular, letting $c_{7} = c^{\prime} c_{10} / 2$, we have $c^{\prime} \frac{r_{k}}{\sqrt{d}} = b_{k-1}$.
Combining this with \eqref{eqn:conditional-error-bound}, Lemma~\ref{lem:uniform-P-concentration}, and a union bound, we have that
\begin{align*}
& \mathcal{P}\left( x : h_{w_{k}}(x) \neq h_{w^{*}}(x)\right)
\\ & \leq \mathcal{P}\left(x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \text{ and } |w_{k-1} \cdot x| \geq b_{k-1} \right)
\\ & {\hskip 3mm}+ \mathcal{P}\left(x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \text{ and } |w_{k-1} \cdot x| \leq b_{k-1} \right)
\\ & \leq 2 c r_{k} + \mathcal{P}\left( x : h_{w_{k}}(x) \neq h_{w^{*}}(x) \Big| |w_{k-1} \cdot x| \leq b_{k-1} \right) \mathcal{P}\left(x : |w_{k-1} \cdot x| \leq b_{k-1}\right)
\\ & \leq 2 c r_{k} + c_{15} \kappa c_{3} b_{k-1} \sqrt{d}
= \left( 2^{5} c c_{10} + c_{15} \kappa c_{3} c_{7} 2^{5} \right) 2^{-k-4}.
\end{align*}
Taking $c = \frac{1}{2^{6} c_{10}}$ and $\kappa = \frac{1}{2^{6} c_{3} c_{7} c_{15}}$,
we have $\mathcal{P}(x : h_{w_{k}}(x) \neq h_{w^{*}}(x)) \leq 2^{-k-4}$, as required.
By a union bound, this occurs with probability at least $1 - (4/3)\delta_{k}$.
\qed
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:linsep-uniform}]
We begin with the bound on the error rate.
If $\Delta > \frac{\pi^{2}}{400 \cdot 2^{27} (d+\ln(4/\delta))}$, the result trivially holds, since then $1 \leq \frac{400 \cdot 2^{27}}{\pi^{2}} \sqrt{\Delta (d+\ln(4/\delta))}$.
Otherwise, suppose $\Delta \leq \frac{\pi^{2}}{400 \cdot 2^{27} (d+\ln(4/\delta))}$.
Fix any $i \in \mathbb{N}$.
Lemma~\ref{lem:perceptron-init} implies that, with probability at least $1-\delta/4$,
the $w_{0}$ returned in Step 0 of ${\rm ABL}(M(i-1),\tilde{h}_{i-1})$ satisfies
$\mathcal{P}(x : h_{w_{0}}(x) \neq h_{M(i-1) + m_{0}+1}^{*}(x)) \leq 1/16$.
Taking this as a base case, Lemma~\ref{lem:margin-error-bound} then inductively implies that,
with probability at least
\begin{multline*}
1 - \frac{\delta}{4} - \sum_{k=1}^{\lceil \log_{2}(1/\alpha) \rceil} (4/3) \frac{\delta}{2(\lceil \log_{2}(4/\alpha) \rceil - k)^{2}}
\geq 1 - \frac{\delta}{2} \left(1 + (4/3) \sum_{\ell=2}^{\infty} \frac{1}{\ell^{2}} \right)
\geq 1 - \delta,
\end{multline*}
every $k \in \{ 0, 1, \ldots, \lceil \log_{2}(1/\alpha) \rceil \}$ has
\begin{equation}
\label{eqn:abl-mistake-prob-raw}
\mathcal{P}(x : h_{w_{k}}(x) \neq h_{M(i-1)+m_{0}+1}^{*}(x)) \leq 2^{-k-4},
\end{equation}
and furthermore the number of labels requested during ${\rm ABL}(M(i-1),\tilde{h}_{i-1})$ total to at most (for appropriate universal constants $\hat{c}_{1},\hat{c}_{2}$)
\begin{align*}
m_{0} + \!\!\!\!\sum_{k=1}^{\lceil \log_{2}(1/\alpha) \rceil} |W_{k}|
& \leq \hat{c}_{1} \left(d + \ln\left(\frac{1}{\delta}\right) + \sum_{k=1}^{\lceil \log_{2}(1/\alpha) \rceil} d \log\left(\frac{( \lceil \log_{2}(4/\alpha) \rceil - k )^{2}}{\delta}\right) \right)
\\ & \leq \hat{c}_{2} d \log\left(\frac{1}{\Delta d}\right)\log\left(\frac{1}{\delta}\right).
\end{align*}
In particular, by a union bound, \eqref{eqn:abl-mistake-prob-raw} implies that for every $k \in \{1,\ldots,\lceil \log_{2}(1/\alpha) \rceil\}$,
every
\begin{equation*}
m \in \left\{ M(i-1) + \sum_{j=0}^{k-1} m_{j} + 1, \ldots, M(i-1) + \sum_{j=0}^{k} m_{j} \right\}
\end{equation*}
has
\begin{align*}
& \mathcal{P}(x : h_{w_{k-1}}(x) \neq h_{m}^{*}(x))
\\ & \leq \mathcal{P}(x : h_{w_{k-1}}(x) \neq h_{M(i-1)+m_{0}+1}^{*}(x)) + \mathcal{P}(x : h_{M(i-1)+m_{0}+1}^{*}(x) \neq h_{m}^{*}(x))
\\ & \leq 2^{-k-3} + \Delta M.
\end{align*}
Thus, noting that
\begin{align*}
M & = \sum_{k=0}^{\lceil \log_{2}(1/\alpha) \rceil} m_{k}
= \Theta\left( d + \log\left(\frac{1}{\delta}\right) + \sum_{k=1}^{\lceil \log_{2}(1/\alpha) \rceil} 2^{k} d \log\left(\frac{\lceil \log_{2}(1/\alpha) \rceil - k}{\delta}\right) \right)
\\ & = \Theta\left( \frac{1}{\alpha} d \log\left(\frac{1}{\delta}\right) \right)
= \Theta\left(\sqrt{\frac{d}{\Delta} \log\left(\frac{1}{\delta}\right)} \right),
\end{align*}
with probability at least $1-\delta$,
\begin{equation*}
\mathcal{P}(x : h_{w_{\lceil \log_{2}(1/\alpha) \rceil-1}}(x) \neq h^{*}_{M i}(x) ) \leq O\left( \alpha + \Delta M \right) = O\left( \sqrt{ \Delta d \log\left(\frac{1}{\delta}\right) } \right).
\end{equation*}
In particular, this implies that, with probability at least $1-\delta$, every $t \in \{M i + 1, \ldots, M (i+1)-1\}$ has
\begin{align*}
{\rm er}_{t}(\hat{h}_{t}) & \leq \mathcal{P}(x : h_{w_{\lceil \log_{2}(1/\alpha) \rceil-1}}(x) \neq h^{*}_{M i}(x) ) + \mathcal{P}( x : h^{*}_{M i}(x) \neq h^{*}_{t}(x) )
\\ & \leq O\left( \sqrt{ \Delta d \log\left(\frac{1}{\delta}\right) } \right) + \Delta M
= O\left( \sqrt{ \Delta d \log\left(\frac{1}{\delta}\right) } \right),
\end{align*}
which completes the proof of the bound on the error rate.
Setting $\delta = \sqrt{\Delta d}$, and noting that $\mathbbm{1}[ \hat{Y}_{t} \neq Y_{t} ] \leq 1$, we have that for any $t > M$,
\begin{equation*}
\mathbb P\left( \hat{Y}_{t} \neq Y_{t} \right)
= \mathbb E\left[ {\rm er}_{t}(\hat{h}_{t}) \right]
\leq O\left( \sqrt{ \Delta d \log\left(\frac{1}{\delta}\right) } \right) + \delta
= O\left( \sqrt{ \Delta d \log\left(\frac{1}{\Delta d}\right) } \right).
\end{equation*}
Thus, by linearity of the expectation,
\begin{equation*}
\mathbb E\left[ \sum_{t=1}^{T} \mathbbm{1}\left[ \hat{Y}_{t} \neq Y_{t} \right] \right]
\leq M + O\left( \sqrt{ \Delta d \log\left(\frac{1}{\Delta d}\right) } T \right)
= O\left( \sqrt{ \Delta d \log\left(\frac{1}{\Delta d}\right) } T \right).
\end{equation*}
Furthermore, as mentioned, with probability at least $1-\delta$,
the number of labels requested during the execution of ${\rm ABL}(M(i-1),\tilde{h}_{i-1})$ is at most
\begin{equation*}
O\left( d \log\left(\frac{1}{\Delta d}\right)\log\left(\frac{1}{\delta}\right) \right).
\end{equation*}
Thus, since the number of labels requested during the execution of ${\rm ABL}(M(i-1),\tilde{h}_{i-1})$ cannot exceed $M$,
letting $\delta = \sqrt{\Delta d}$, the expected number of requested labels during this execution is at most
\begin{align*}
O\left( d \log^{2}\left(\frac{1}{\Delta d}\right) \right) + \sqrt{\Delta d} M
& \leq O\left( d \log^{2}\left(\frac{1}{\Delta d}\right) \right) + O\left( d \sqrt{\log\left(\frac{1}{\Delta d}\right) } \right)
\\ & = O\left( d \log^{2}\left(\frac{1}{\Delta d}\right) \right).
\end{align*}
Thus, by linearity of the expectation, the expected number of labels requested among the first $T$ samples is at most
\begin{equation*}
O\left( d \log^{2}\left(\frac{1}{\Delta d}\right) \left\lceil \frac{T}{M} \right\rceil \right)
= O\left( \sqrt{\Delta d} \log^{3/2}\left(\frac{1}{\Delta d}\right) T \right),
\end{equation*}
which completes the proof.
\qed
\end{proof}
\paragraph{Remark:} The original work of \cite{min_concept} additionally allowed for some number $K$ of ``jumps'':
times $t$ at which $\Delta_{t} = 1$. Note that, in the above algorithm, since the influence of each sample is localized to the predictors trained
within that ``batch'' of $M$ instances, the effect of allowing such jumps would only change the bound on the number of
mistakes to $\tilde{O}\left(\sqrt{d \Delta} T + \sqrt{\frac{d}{\Delta}} K \right)$. This compares favorably to the
result of \cite{min_concept}, which is roughly $O\left( (d \Delta)^{1/4} T + \frac{d^{1/4}}{\Delta^{3/4}} K \right)$.
However, the result of \cite{min_concept} was proven for a more general setting, allowing distributions $\mathcal{P}$
that are not uniform (though they do require a relation between the angle between any two separators and the
probability mass they disagree on, similar to that holding for the uniform distribution, which seems to require that the
distributions approximately retain some properties of the uniform distribution). It is not clear whether Theorem~\ref{thm:linsep-uniform} can be
generalized to this larger family of distributions.
\section{General Results for Active Learning}
\label{sec:general-active}
As mentioned, the above results on linear separators also provide results
for the number of queries in \emph{active learning}. One can also state
quite general results on the expected number of queries and mistakes
achievable by an active learning algorithm.
This section provides such results, for an algorithm based on
the well-known strategy of \emph{disagreement-based} active learning.
Throughout this section, we suppose $h^{*}_{\mathrm{seq}} \in S_{\Delta}$,
for a given $\Delta \in (0,1]$: that is, $\mathcal{P}( x : h^{*}_{t+1}(x) \neq h^{*}_{t}(x)) \leq \Delta$
for all $t \in \mathbb{N}$.
First, we introduce a few definitions.
For any set $\mathcal H \subseteq \mathbb C$, define the \emph{region of disagreement}
\begin{equation*}
\mathrm{DIS}(\mathcal H) = \{x \in \mathcal X : \exists h,g \in \mathcal H \text{ s.t. } h(x) \neq g(x) \}.
\end{equation*}
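For example, if $\mathcal X = \mathbb{R}$ and $\mathcal H$ consists of the threshold classifiers $x \mapsto {\rm sign}(x - t)$ for $t \in [a,b]$, then (up to endpoint conventions) $\mathrm{DIS}(\mathcal H) = [a,b)$: precisely those points whose label is not yet determined by the knowledge that the target lies in $\mathcal H$.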
The analysis in this section is centered around the following algorithm.
The ${\rm Active}$ subroutine is from the work of \cite{hanneke:activized} (slightly modified here),
and is a variant of the $A^2$ (Agnostic Active) algorithm of \cite{A2};
the appropriate values of $M$ and $\hat{T}_{k}(\cdot)$ will be discussed below.
\begin{bigboxit}
Algorithm: ${\rm DriftingActive}$\\
0. For $i = 1,2,\ldots$\\
1. \quad ${\rm Active}(M (i-1))$\\
\end{bigboxit}
\begin{bigboxit}
Subroutine: ${\rm Active}(t)$\\
0. Let $\hat{h}_{0}$ be an arbitrary element of $\mathbb C$, and let $V_{0} \gets \mathbb C$\\
1. Predict $\hat{Y}_{t+1} = \hat{h}_{0}(X_{t+1})$ as the prediction for the value of $Y_{t+1}$\\
2. For $k = 0,1,\ldots,\log_{2}(M/2)$\\
3. \quad $Q_{k} \gets \{\}$\\
4. \quad For $s = 2^{k}+1,\ldots,2^{k+1}$\\
5. \qquad Predict $\hat{Y}_{s} = \hat{h}_{k}(X_{s})$ as the prediction for the value of $Y_{s}$\\
6. \qquad If $X_{s} \in \mathrm{DIS}(V_{k})$\\
7. \quad\qquad Request the label $Y_{s}$ and let $Q_{k} \gets Q_{k} \cup \{(X_{s},Y_{s})\}$\\
8. \quad Let $\hat{h}_{k+1} = \mathop{\rm argmin}_{h \in V_{k}} \sum_{(x,y) \in Q_{k}} \mathbbm{1}[h(x) \neq y]$\\
9. \quad Let $V_{k+1} \gets \{h \in V_{k} : \sum_{(x,y) \in Q_{k}} \mathbbm{1}[h(x) \neq y] - \mathbbm{1}[\hat{h}_{k+1}(x) \neq y] \leq \hat{T}_{k}\}$
\end{bigboxit}
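To make the querying and pruning steps concrete, here is a minimal sketch of rounds $2^{k}+1,\ldots,2^{k+1}$ for a finite version space of callables (the function and variable names are ours; for infinite $\mathbb C$, the disagreement test and the update in Step 9 require a suitable representation of the version space):
\begin{verbatim}
def active_epoch(V, X, Y, k, T_hat):
    Q = []
    for s in range(2 ** k, 2 ** (k + 1)):  # 0-based; s+1 in 2^k+1..2^{k+1}
        if len({h(X[s]) for h in V}) > 1:  # X[s] in DIS(V)
            Q.append((X[s], Y[s]))         # request the label
    errs = {h: sum(h(x) != y for x, y in Q) for h in V}
    best = min(errs.values())
    # Step 9: keep classifiers with small excess empirical error
    return [h for h in V if errs[h] - best <= T_hat]
\end{verbatim}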
As in the ${\rm DriftingHalfspaces}$ algorithm above, this ${\rm DriftingActive}$
algorithm proceeds in batches, and in each batch runs an active learning algorithm
designed to be robust to classification noise. This robustness to classification noise
translates into our setting as tolerance for the fact that there is no classifier in $\mathbb C$
that perfectly classifies all of the data. The specific algorithm employed here maintains
a set $V_{k} \subseteq \mathbb C$ of candidate classifiers, and requests the labels of samples $X_{s}$
for which there is some disagreement on the classification among classifiers in $V_{k}$.
We maintain the invariant that there is a low-error classifier contained in $V_{k}$ at all
times, and thus the points we query provide some information to help us determine
which among these remaining candidates has low error rate. Based on these queries,
we periodically (in Step 9) remove from $V_{k}$ those classifiers making a relatively excessive
number of mistakes on the queried samples, relative to the minimum among classifiers in $V_{k}$.
All predictions are made with an element of $V_{k}$.\footnote{One could alternatively proceed
as in ${\rm DriftingHalfspaces}$, using the final classifier from the previous batch, which
would also add a guarantee on the error rate achieved at all sufficiently large $t$.}
We prove an abstract bound on the number of labels requested by this algorithm,
expressed in terms of the \emph{disagreement coefficient} \cite{hanneke:07b},
defined as follows. For any $r \geq 0$ and any classifier $h$, define ${\rm B}(h,r) = \{g \in \mathbb C : \mathcal{P}(x : g(x) \neq h(x)) \leq r\}$.
Then for $r_{0} \geq 0$ and any classifier $h$, define the disagreement coefficient of $h$ with respect to $\mathbb C$ under $\mathcal{P}$:
\begin{equation*}
\theta_{h}(r_{0}) = \sup_{r > r_{0}} \frac{ \mathcal{P}( \mathrm{DIS}( {\rm B}( h, r ) ) ) }{r}.
\end{equation*}
Usually, the disagreement coefficient would be used with $h$ equal the target concept;
however, since the target concept is not fixed in our setting,
we will make use of the worst-case value of the disagreement coefficient:
$\theta_{\mathbb C}(r_{0}) = \sup_{h \in \mathbb C} \theta_{h}(r_{0})$.
This quantity has been bounded for a variety of spaces $\mathbb C$ and distributions $\mathcal{P}$
(see e.g., \cite{hanneke:07b,el-yaniv:12,balcan:13}).
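For instance, for homogeneous linear separators under the uniform distribution on the sphere, it is known that $\theta_{\mathbb C}(r_{0}) \leq \pi \sqrt{d}$ for all $r_{0} > 0$ \cite{hanneke:07b}, in which case the label bound of Theorem~\ref{thm:general-active} below scales as $\tilde{O}( d \sqrt{\Delta} ) T$.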
It is useful in bounding how quickly the region $\mathrm{DIS}(V_{k})$ collapses in the
algorithm. Thus, since the probability the algorithm requests the label of the next instance
is $\mathcal{P}(\mathrm{DIS}(V_{k}))$, the quantity $\theta_{\mathbb C}(r_{0})$ naturally arises in characterizing the
number of labels we expect this algorithm to request.
Specifically, we have the following result.\footnote{Here,
we define $\lceil x \rceil_{2} = 2^{\lceil \log_{2}(x) \rceil}$, for $x \geq 1$.}
\begin{theorem}
\label{thm:general-active}
For an appropriate universal constant $c_{1} \in [1,\infty)$,
if $h^{*}_{\mathrm{seq}} \in S_{\Delta}$ for some $\Delta \in (0,1]$,
then taking $M = \left\lceil c_{1} \sqrt{\frac{d}{\Delta}} \right\rceil_{2}$,
and $\hat{T}_{k} = \log_{2}(1/\sqrt{d \Delta}) + 2^{2k+2} e \Delta$,
and defining $\epsilon_{\Delta} = \sqrt{d\Delta} {\rm Log}(1/(d\Delta))$,
the above ${\rm DriftingActive}$ algorithm makes an expected number of mistakes among the
first $T$ instances that is
\begin{equation*}
O\left(\epsilon_{\Delta} {\rm Log}(d/\Delta) T \right) = \tilde{O}\left( \sqrt{d\Delta} \right) T
\end{equation*}
and requests an expected number of labels among the first $T$ instances that is
\begin{equation*}
O\left( \theta_{\mathbb C}( \epsilon_{\Delta} ) \epsilon_{\Delta} {\rm Log}(d/\Delta) T \right) = \tilde{O}\left( \theta_{\mathbb C}(\sqrt{d \Delta}) \sqrt{d \Delta} \right) T.
\end{equation*}
\end{theorem}
The proof of Theorem~\ref{thm:general-active} relies on an analysis of the behavior of the ${\rm Active}$ subroutine,
characterized in the following lemma.
\begin{lemma}
\label{lem:active-subroutine}
Fix any $t \in \mathbb{N}$, and consider the values obtained in the execution of ${\rm Active}(t)$.
Under the conditions of Theorem~\ref{thm:general-active},
there is a universal constant $c_{2} \in [1,\infty)$ such that,
for any $k \in \{0,1,\ldots,\log_{2}(M/2)\}$,
with probability at least $1-2\sqrt{d \Delta}$, if
$h^{*}_{t+1} \in V_{k}$,
then $h^{*}_{t+1} \in V_{k+1}$ and
$\sup_{h \in V_{k+1}} \mathcal{P}(x : h(x) \neq h^{*}_{t+1}(x)) \leq c_{2}
2^{-k} d {\rm Log}(c_{1} / \sqrt{d\Delta})$.
\end{lemma}
\begin{proof}
By a Chernoff bound, with probability at least $1-\sqrt{d \Delta}$,
\begin{equation*}
\sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[h^{*}_{t+1}(X_{s}) \neq Y_{s}]
\leq \log_{2}(1/\sqrt{d \Delta}) + 2^{2k+2} e \Delta
= \hat{T}_{k}.
\end{equation*}
Therefore, if $h^{*}_{t+1} \in V_{k}$, then since every $g \in V_{k}$
agrees with $h^{*}_{t+1}$ on those points $X_{s} \notin \mathcal DIS(V_{k})$,
in the update in Step 9 defining $V_{k+1}$,
we have
\begin{align*}
& \sum_{(x,y) \in Q_{k}} \mathbbm{1}[h^{*}_{t+1}(x) \neq y] - \mathbbm{1}[\hat{h}_{k+1}(x) \neq y]
\\ & = \sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[h^{*}_{t+1}(X_{s}) \neq Y_{s}]
- \min_{g \in V_{k}} \sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[g(X_{s}) \neq Y_{s}]
\\ & \leq \sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[h^{*}_{t+1}(X_{s}) \neq Y_{s}] \leq \hat{T}_{k},
\end{align*}
so that $h^{*}_{t+1} \in V_{k+1}$ as well.
Furthermore, if $h^{*}_{t+1} \in V_{k}$,
then by the definition of $V_{k+1}$,
we know every $h \in V_{k+1}$ has
\begin{equation*}
\sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[ h(X_{s}) \neq Y_{s} ]
\leq \hat{T}_{k} + \sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[ h^{*}_{t+1}(X_{s}) \neq Y_{s} ],
\end{equation*}
so that a triangle inequality implies
\begin{align*}
\sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[ h(X_{s}) \neq h^{*}_{t+1}(X_{s}) ]
& \leq
\sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[ h(X_{s}) \neq Y_{s} ]
+ \mathbbm{1}[ h^{*}_{t+1}(X_{s}) \neq Y_{s} ]
\\ & \leq
\hat{T}_{k} + 2 \sum_{s=2^{k}+1}^{2^{k+1}} \mathbbm{1}[ h^{*}_{t+1}(X_{s}) \neq Y_{s} ]
\leq 3 \hat{T}_{k}.
\end{align*}
Lemma~\ref{lem:vc-ratio} then implies that, on an additional event of
probability at least $1-\sqrt{d \Delta}$,
every $h \in V_{k+1}$ has
\begin{align*}
& \mathcal{P}(x : h(x) \neq h^{*}_{t+1}(x))
\\ & \leq 2^{-k} 3\hat{T}_{k} + c 2^{-k} \sqrt{3\hat{T}_{k} (d {\rm Log}(2^{k}/d)+{\rm Log}(1/\sqrt{d\Delta}))}
\\ & \phantom{\leq } + c 2^{-k} (d {\rm Log}(2^{k}/d) + {\rm Log}(1/\sqrt{d\Delta}))
\\ & \leq
2^{-k} 3 \log_{2}(1/\sqrt{d\Delta})
+ 2^{k} 12 e \Delta
+ c 2^{-k} \sqrt{ 6 \log_{2}(1/\sqrt{d\Delta}) d {\rm Log}(c_{1} / \sqrt{d\Delta})}
\\ & \phantom{\leq } + c 2^{-k} \sqrt{ 2^{2k} 24 e \Delta d {\rm Log}(c_{1} / \sqrt{d\Delta}) }
+ 2 c 2^{-k} d {\rm Log}(c_{1} / \sqrt{d\Delta})
\\ &
\leq
2^{-k} 3 \log_{2}(1/\sqrt{d\Delta})
+ 12 e c_{1} \sqrt{d\Delta}
+ 3 c 2^{-k} \sqrt{ d } {\rm Log}(c_{1} / \sqrt{d\Delta})
\\ & \phantom{\leq } + \sqrt{24 e} c \sqrt{d \Delta {\rm Log}(c_{1} / \sqrt{d\Delta}) }
+ 2 c 2^{-k} d {\rm Log}(c_{1} / \sqrt{d\Delta}),
\end{align*}
where $c$ is as in Lemma~\ref{lem:vc-ratio}.
Since $\sqrt{d \Delta} \leq 2 c_{1} d / M \leq c_{1} d 2^{-k}$,
this is at most
\begin{equation*}
\left(5 + 12 e c_{1}^{2} + 3 c + \sqrt{24 e} c c_{1} + 2 c\right)
2^{-k} d {\rm Log}(c_{1} / \sqrt{d\Delta}).
\end{equation*}
Letting $c_{2} = 5 + 12 e c_{1}^{2} + 3 c + \sqrt{24 e} c c_{1} + 2 c$,
we have the result by a union bound.
\qed
\end{proof}
We are now ready for the proof of Theorem~\ref{thm:general-active}.
\begin{proof}[Proof of Theorem~\ref{thm:general-active}]
Fix any $i \in \mathbb{N}$, and consider running ${\rm Active}(M(i-1))$.
Since $h^{*}_{M(i-1)+1} \in \mathbb C$,
by Lemma~\ref{lem:active-subroutine}, a union bound, and induction,
with probability at least $1-2\sqrt{d\Delta} \log_{2}(M/2)
\geq 1 - 2 \sqrt{d\Delta} \log_{2}(c_{1}\sqrt{d/\Delta})$,
every $k \in \{0,1,\ldots,\log_{2}(M/2)\}$ has
\begin{equation}
\label{eqn:general-active-radius}
\sup_{h \in V_{k}} \mathcal{P}(x : h(x) \neq h^{*}_{M(i-1)+1}(x)) \leq
c_{2} 2^{1-k} d {\rm Log}(c_{1} / \sqrt{d\Delta}).
\end{equation}
Thus, since $\hat{h}_{k} \in V_{k}$ for each $k$,
the expected number of mistakes among the predictions
$\hat{Y}_{M(i-1)+1},\ldots,\hat{Y}_{M i}$
is
\begin{align*}
& 1 + \sum_{k=0}^{\log_{2}(M/2)} \sum_{s=2^{k}+1}^{2^{k+1}} \mathbb P(\hat{h}_{k}(X_{M(i-1)+s}) \neq Y_{M(i-1)+s})
\\ & \leq 1 + \sum_{k=0}^{\log_{2}(M/2)} \sum_{s=2^{k}+1}^{2^{k+1}}
\mathbb P(h^{*}_{M(i-1)+1}(X_{M(i-1)+s}) \neq Y_{M(i-1)+s})
\\ & \phantom{\leq } + \sum_{k=0}^{\log_{2}(M/2)} \sum_{s=2^{k}+1}^{2^{k+1}} \mathbb P(\hat{h}_{k}(X_{M(i-1)+s}) \neq h^{*}_{M(i-1)+1}(X_{M(i-1)+s}))
\\ & \leq
1 + \Delta M^{2} +
\sum_{k=0}^{\log_{2}(M/2)} 2^{k} \left( c_{2} 2^{1-k} d {\rm Log}(c_{1} / \sqrt{d\Delta}) + 2\sqrt{d\Delta}\log_{2}(M/2)\right)
\\ & \leq
1 + 4 c_{1}^{2} d + 2 c_{2} d {\rm Log}(c_{1} / \sqrt{d\Delta}) \log_{2}(2 c_{1} \sqrt{d/\Delta})
+ 4c_{1} d \log_{2}(c_{1} \sqrt{d/\Delta})
\\ & =
O\left( d {\rm Log}(d/\Delta) {\rm Log}(1/(d\Delta)) \right).
\end{align*}
Furthermore, \eqref{eqn:general-active-radius} implies the algorithm only
requests the label $Y_{M(i-1)+s}$ for $s \in \{2^{k}+1,\ldots,2^{k+1}\}$
if $X_{M(i-1)+s} \in \mathrm{DIS}({\rm B}(h^{*}_{M(i-1)+1}, c_{2} 2^{1-k} d {\rm Log}(c_{1} / \sqrt{d\Delta})))$,
so that the expected number of labels requested among $Y_{M(i-1)+1},\ldots,Y_{M i}$ is at most
\begin{align*}
& 1 + \sum_{k=0}^{\log_{2}(M/2)} 2^{k} \left(\mathbb E[ \mathcal{P}(\mathrm{DIS}({\rm B}(h^{*}_{M(i-1)+1}, c_{2} 2^{1-k} d {\rm Log}(c_{1}/\sqrt{d\Delta}))))] \right.
\\ & {\hskip 6cm}\left.+ 2 \sqrt{d\Delta} \log_{2}(c_{1}\sqrt{d/\Delta})\right)
\\ & \leq
1 + \theta_{\mathbb C}\left(4 c_{2} d {\rm Log}(c_{1}/\sqrt{d\Delta}) / M\right) 2 c_{2} d {\rm Log}(c_{2}/\sqrt{d\Delta}) \log_{2}(2 c_{1} \sqrt{d/\Delta})
\\ & {\hskip 6cm}+ 4 c_{1} d \log_{2}(c_{1}\sqrt{d/\Delta})
\\ & =
O\left( \theta_{\mathbb C}\left( \sqrt{d\Delta} {\rm Log}(1/(d\Delta)) \right) d {\rm Log}(d/\Delta) {\rm Log}(1/(d\Delta)) \right).
\end{align*}
Thus, the expected number of mistakes among indices $1,\ldots,T$ is at most
\begin{equation*}
O\left( d {\rm Log}(d/\Delta) {\rm Log}(1/(d\Delta)) \left\lceil \frac{T}{M} \right\rceil \right)
= O\left( \sqrt{d\Delta} {\rm Log}(d/\Delta) {\rm Log}(1/(d\Delta)) T \right),
\end{equation*}
and the expected number of labels requested among indices $1,\ldots,T$ is at most
\begin{multline*}
O\left( \theta_{\mathbb C}\left( \sqrt{d\Delta} {\rm Log}(1/(d\Delta)) \right) d {\rm Log}(d/\Delta) {\rm Log}(1/(d\Delta)) \left\lceil \frac{T}{M} \right\rceil \right)
\\ = O\left( \theta_{\mathbb C}\left( \sqrt{d\Delta} {\rm Log}(1/(d\Delta)) \right) \sqrt{d\Delta} {\rm Log}(d/\Delta) {\rm Log}(1/(d\Delta)) T \right).
\end{multline*}
\qed
\end{proof}
\end{document} |
\begin{document}
\date{}
\title{ON THE UNIVERSALITY OF SOME SMARANDACHE LOOPS OF BOL-MOUFANG TYPE
\footnote{2000 Mathematics Subject Classification. Primary 20N05;
Secondary 08A05.}
\thanks{{\bf Keywords and Phrases :} Smarandache quasigroups, Smarandache loops, universality, $f,g$-principal isotopes}}
\author{T\`em\'it\'op\'e Gb\'ol\'ah\`an Ja\'iy\'e\d ol\'a\thanks{On Doctorate Programme at
the University of Agriculture Abeokuta, Nigeria.}
\thanks{All correspondence to be addressed to this author}\\
Department of Mathematics,\\
Obafemi Awolowo University, Ile Ife, Nigeria.\\
[email protected], [email protected]} \maketitle
\begin{abstract}
A Smarandache quasigroup(loop) is shown to be universal if all its
$f,g$-principal isotopes are Smarandache $f,g$-principal isotopes.
Also, weak Smarandache loops of Bol-Moufang type such as
Smarandache: left(right) Bol, Moufang and extra loops are shown to
be universal if all their $f,g$-principal isotopes are Smarandache
$f,g$-principal isotopes. Conversely, it is shown that if these weak
Smarandache loops of Bol-Moufang type are universal, then some
autotopisms are true in the weak Smarandache sub-loops of the weak
Smarandache loops of Bol-Moufang type relative to some Smarandache
elements. Furthermore, a Smarandache left(right) inverse property
loop in which all its $f,g$-principal isotopes are Smarandache
$f,g$-principal isotopes is shown to be universal if and only if it
is a Smarandache left(right) Bol loop in which all its
$f,g$-principal isotopes are Smarandache $f,g$-principal isotopes.
Also, it is established that a Smarandache inverse property loop in
which all its $f,g$-principal isotopes are Smarandache
$f,g$-principal isotopes is universal if and only if it is a
Smarandache Moufang loop in which all its $f,g$-principal isotopes
are Smarandache $f,g$-principal isotopes. Hence, some of the
autotopisms earlier mentioned are found to be true in the
Smarandache sub-loops of universal Smarandache: left(right) inverse
property loops and inverse property loops.
\end{abstract}
\section{Introduction}
W. B. Vasantha Kandasamy initiated the study of Smarandache loops
(S-loop) in 2002. In her book \cite{phd75}, she defined a
Smarandache loop (S-loop) as a loop with at least a subloop which
forms a subgroup under the binary operation of the loop called a
Smarandache subloop (S-subloop). In \cite{sma2}, the present author
defined a Smarandache quasigroup (S-quasigroup) to be a quasigroup
with at least a non-trivial associative subquasigroup called a
Smarandache subquasigroup (S-subquasigroup). Examples of Smarandache
quasigroups are given in Muktibodh \cite{muk}. For more on
quasigroups, loops and their properties, readers should check
\cite{phd3}, \cite{phd41},\cite{phd39}, \cite{phd49}, \cite{phd42}
and \cite{phd75}. In her first paper
\cite{phd83}, W. B. Vasantha Kandasamy introduced Smarandache: left(right) alternative
loops, Bol loops, Moufang loops, and Bruck loops. But in
\cite{sma1}, the present author introduced Smarandache: inverse
property loops (IPL), weak inverse property loops (WIPL), G-loops,
conjugacy closed loops (CC-loop), central loops, extra loops,
A-loops, K-loops, Bruck loops, Kikkawa loops, Burn loops and
homogeneous loops. The isotopic invariance of types and varieties of
quasigroups and loops described by one or more equivalent
identities, especially those that fall in the class of Bol-Moufang
type loops as first named by Fenyves \cite{phd56} and \cite{phd50}
in the 1960s and later on in this $21^{st}$ century by Phillips and
Vojt\v echovsk\'y \cite{phd9}, \cite{phd61} and \cite{phd124} have
been of interest to researchers in loop theory in the recent past.
For example, loops such as Bol loops, Moufang loops, central loops
and extra loops are the most popular loops of Bol-Moufang type whose
isotopic invariance have been considered. Their identities relative
to quasigroups and loops have also been investigated by Kunen
\cite{ken1} and \cite{ken2}. A loop is said to be universal relative
to a property ${\cal P}$ if it is isotopic invariant relative to
${\cal P}$, hence such a loop is called a universal ${\cal P}$ loop.
This language is well used in \cite{phd88}. The universality of most
loops of Bol-Moufang types have been studied as summarised in
\cite{phd3}. Left(Right) Bol loops, Moufang loops, and extra loops
have all been found to be isotopic invariant. But some types of
central loops were shown to be universal in Ja\'iy\'e\d ol\'a
\cite{tope} and \cite{phdtope} under some conditions. Some other
types of loops such as A-loops, weak inverse property loops and
cross inverse property loops (CIPL) have been found to be universal
under some necessary and sufficient conditions in \cite{phd40},
\cite{phd43} and \cite{phd30} respectively. Recently, Michael Kinyon
et al.\ \cite{phd95}, \cite{phd118}, \cite{phd119} solved the
Belousov problem concerning the universality of F-quasigroups which
has been open since 1967 by showing that all the isotopes of
F-quasigroups are Moufang loops.
In this work, the universality of the Smarandache concept in loops
is investigated. That is, will all isotopes of an S-loop be an
S-loop? The answer to this could be `yes' since every isotope of a
group is a group (groups are G-loops). Also, the universality of
weak Smarandache loops, such as Smarandache Bol loops (SBL),
Smarandache Moufang loops (SML) and Smarandache extra loops (SEL)
will also be investigated despite the fact that it could be expected
to be true since Bol loops, Moufang loops and extra loops are
universal. The universality of a Smarandache inverse property loop
(SIPL) will also be considered.
\section{Preliminaries}
\begin{mydef}
A loop is called a Smarandache left inverse property loop (SLIPL) if
it has at least a non-trivial subloop with the LIP.
A loop is called a Smarandache right inverse property loop (SRIPL)
if it has at least a non-trivial subloop with the RIP.
A loop is called a Smarandache inverse property loop (SIPL) if it
has at least a non-trivial subloop with the IP.
A loop is called a Smarandache right Bol-loop (SRBL) if it has at
least a non-trivial subloop that is a right Bol(RB)-loop.
A loop is called a Smarandache left Bol-loop (SLBL) if it has at
least a non-trivial subloop that is a left Bol(LB)-loop.
A loop is called a Smarandache central-loop (SCL) if it has at least
a non-trivial subloop that is a central-loop.
A loop is called a Smarandache extra-loop (SEL) if it has at least a
non-trivial subloop that is an extra-loop.
A loop is called a Smarandache A-loop (SAL) if it has at least a
non-trivial subloop that is an A-loop.
A loop is called a Smarandache Moufang-loop (SML) if it has at least
a non-trivial subloop that is a Moufang-loop.
\end{mydef}
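For illustration (a remark added here, not part of the original
definitions): since a group satisfies each of the defining properties
above (the inverse properties, the Bol, central, extra and Moufang
identities, and the A-loop property), any loop that contains a
non-trivial subgroup is simultaneously an SLIPL, SRIPL, SIPL, SRBL,
SLBL, SCL, SEL, SAL and SML. S-loops in the sense of \cite{phd75}
therefore furnish examples of all of these classes.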
\begin{mydef}
Let $(G,\oplus)$ and $(H,\otimes)$ be two distinct quasigroups. The
triple $(A,B,C)$ such that $A,B,C~:~(G,\oplus)\rightarrow
(H,\otimes)$ are bijections is said to be an isotopism if and only
if
\begin{displaymath}
xA\otimes yB=(x\oplus y)C~\forall~x,y\in G.
\end{displaymath}
Thus, $H$ is called an isotope of $G$ and they are said to be
isotopic. If $C=I$, then the triple is called a principal isotopism
and $(H,\otimes)=(G,\otimes )$ is called a principal isotope of
$(G,\oplus )$. If, in addition, $A=R_g$ and $B=L_f$, then the triple
is called an $f,g$-principal isotopism, and $(G,\otimes )$ is referred
to as the $f,g$-principal isotope of $(G,\oplus )$.
A subloop(subquasigroup) $(S,\otimes )$ of a loop(quasigroup)
$(G,\otimes )$ is called a Smarandache $f,g$-principal isotope of
the subloop(subquasigroup) $(S,\oplus )$ of a loop(quasigroup)
$(G,\oplus )$ if for some $f,g\in S$,
\begin{displaymath}
xR_g\otimes yL_f=(x\oplus y)~\forall~x,y\in S.
\end{displaymath}
On the other hand $(G,\otimes )$ is called a Smarandache
$f,g$-principal isotope of $(G,\oplus )$ if for some $f,g\in S$,
\begin{displaymath}
xR_g\otimes yL_f=(x\oplus y)~\forall~x,y\in G
\end{displaymath}
where $(S,\oplus )$ is a S-subquasigroup(S-subloop) of $(G,\oplus
)$. In these cases, $f$ and $g$ are called Smarandache
elements(S-elements).
\end{mydef}
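For later use, note the following short derivation (added here; it is
implicit in the definition): replacing $x$ by $xR_g^{-1}$ and $y$ by
$yL_f^{-1}$ in the relation $xR_g\otimes yL_f=x\oplus y$ solves it
for $\otimes$,
\begin{displaymath}
x\otimes y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in G,
\end{displaymath}
which is the form of the $f,g$-principal isotope used repeatedly in
the proofs below.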
\begin{myth}\label{1:1}(\cite{phd41})
Let $(G,\oplus)$ and $(H,\otimes)$ be two distinct isotopic
loops(quasigroups). There exists an $f,g$-principal isotope
$(G,\circ )$ of $(G,\oplus)$ such that $(H,\otimes)\cong (G,\circ
)$.
\end{myth}
\begin{mycor}\label{1:2}
Let ${\cal P}$ be an isotopic invariant property in
loops(quasigroups). If $(G,\oplus)$ is a loop(quasigroup) with the
property ${\cal P}$, then $(G,\oplus)$ is a universal
loop(quasigroup) relative to the property ${\cal P}$ if and only if
every $f,g$-principal isotope $(G,\circ )$ of $(G,\oplus)$ has the
property ${\cal P}$.
\end{mycor}
{\bf Proof}\\
If $(G,\oplus)$ is a universal loop relative to the property ${\cal
P}$ then every distinct loop isotope $(H,\otimes)$ of $(G,\oplus)$
has the property ${\cal P}$. By Theorem~\ref{1:1}, there exists an
$f,g$-principal isotope $(G,\circ )$ of $(G,\oplus)$ such that
$(H,\otimes)\cong (G,\circ )$. Hence, since ${\cal P}$ is invariant
under isomorphism, every such $(G,\circ )$ has it.\\
Conversely, suppose every $f,g$-principal isotope $(G,\circ )$ of
$(G,\oplus)$ has the property ${\cal P}$. By
Theorem~\ref{1:1}, for each distinct isotope $(H,\otimes)$ there
exists an $f,g$-principal isotope $(G,\circ )$ of $(G,\oplus)$ such
that $(H,\otimes)\cong (G,\circ )$; hence every $(H,\otimes)$ has the
property. Thus, $(G,\oplus)$ is a universal loop relative to the
property ${\cal P}$.
\begin{mylem}\label{1:3}
Let $(G,\oplus)$ be a loop(quasigroup) with a subloop(subquasigroup)
$(S,\oplus )$. If $(G,\circ )$ is an arbitrary $f,g$-principal
isotope of $(G,\oplus)$, then $(S,\circ )$ is a
subloop(subquasigroup) of $(G,\circ)$ if $(S,\circ )$ is a
Smarandache $f,g$-principal isotope of $(S,\oplus )$.
\end{mylem}
{\bf Proof}\\
If $(S,\circ )$ is a Smarandache $f,g$-principal isotope of
$(S,\oplus )$, then for some $f,g\in S$,
\begin{displaymath}
xR_g\circ yL_f=(x\oplus y)~\forall~x,y\in S\Rightarrow x\circ
y=xR_g^{-1}\oplus yL_f^{-1}\in S~\forall~x,y\in S
\end{displaymath}
since $f,g\in S$. So, $(S,\circ )$ is a subgroupoid of $(G,\circ )$.
That $(S,\circ )$ is a subquasigroup follows from the fact that
$(S,\oplus )$ is a subquasigroup. Moreover, $f\oplus g$ is a
two-sided identity element in $(S,\circ )$. Thus, $(S,\circ )$ is a
subloop of $(G,\circ )$.
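As a quick check (a computation added for the reader), $f\oplus g$ is
indeed a two-sided identity for $\circ$: for all $x,y\in S$,
\begin{displaymath}
(f\oplus g)\circ y=(f\oplus g)R_g^{-1}\oplus yL_f^{-1}=f\oplus yL_f^{-1}=yL_f^{-1}L_f=y,
\end{displaymath}
\begin{displaymath}
x\circ (f\oplus g)=xR_g^{-1}\oplus (f\oplus g)L_f^{-1}=xR_g^{-1}\oplus g=xR_g^{-1}R_g=x.
\end{displaymath}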
\section{Main Results}
\subsection*{Universality of Smarandache Loops}
\begin{myth}\label{1:4}
A Smarandache quasigroup is universal if all its $f,g$-principal
isotopes are Smarandache $f,g$-principal isotopes.
\end{myth}
{\bf Proof}\\
Let $(G,\oplus)$ be a Smarandache quasigroup with a S-subquasigroup
$(S,\oplus )$. If $(G,\circ )$ is an arbitrary $f,g$-principal
isotope of $(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a
subquasigroup of $(G,\circ)$ if $(S,\circ )$ is a Smarandache
$f,g$-principal isotope of $(S,\oplus )$. Let us choose all
$(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath} It shall now be shown that
\begin{displaymath}(x\circ y)\circ z=x\circ (y\circ z)~\forall~x,y,z\in
S.
\end{displaymath}
In the computation below, juxtaposition takes precedence over
$\oplus$; since $(S,\oplus )$ is a group, $xR_g^{-1}=x\oplus
g^{-1}=xg^{-1}$ and $yL_f^{-1}=f^{-1}\oplus y=f^{-1}y$ for
$x,y,f,g\in S$, and the steps below also use the associativity of
$(S,\oplus )$.
\begin{displaymath}
(x\circ y)\circ z=(xR_g^{-1}\oplus yL_f^{-1})\circ z=(xg^{-1}\oplus
f^{-1}y)\circ z=(xg^{-1}\oplus f^{-1}y)R_g^{-1}\oplus
zL_f^{-1}
\end{displaymath}
\begin{displaymath}
=(xg^{-1}\oplus f^{-1}y)g^{-1}\oplus f^{-1}z=xg^{-1}\oplus
f^{-1}yg^{-1}\oplus f^{-1}z.
\end{displaymath}
\begin{displaymath}
x\circ (y\circ z)=x\circ (yR_g^{-1}\oplus zL_f^{-1})=x\circ
(yg^{-1}\oplus f^{-1}z)=xR_g^{-1}\oplus (yg^{-1}\oplus
f^{-1}z)L_f^{-1}
\end{displaymath}
\begin{displaymath}
=xg^{-1}\oplus f^{-1}(yg^{-1}\oplus
f^{-1}z)=xg^{-1}\oplus f^{-1}yg^{-1}\oplus f^{-1}z.
\end{displaymath}
Thus, $(S,\circ )$ is an S-subquasigroup of $(G,\circ )$; hence
$(G,\circ )$ is an S-quasigroup. By Theorem~\ref{1:1}, for any
isotope $(H,\otimes )$ of $(G,\oplus)$, there exists a $(G,\circ )$
such that $(H,\otimes )\cong (G,\circ )$. So we can now choose the
isomorphic image of $(S,\circ)$ which will now be an S-subquasigroup
in $(H,\otimes )$. So, $(H,\otimes )$ is an S-quasigroup. This
conclusion can also be drawn straight from Corollary~\ref{1:2}.
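In fact (a side remark with a one-line check), when $(S,\oplus )$ is
a group, the map $w\mapsto f\oplus w\oplus g$ is an isomorphism of
$(S,\oplus )$ onto $(S,\circ )$:
\begin{displaymath}
(f\oplus x\oplus g)\circ (f\oplus y\oplus g)=f\oplus x\oplus g\oplus g^{-1}\oplus f^{-1}\oplus f\oplus y\oplus g=f\oplus (x\oplus y)\oplus g,
\end{displaymath}
in line with the remark in the introduction that every loop isotope
of a group is isomorphic to it.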
\begin{myth}\label{1:5}
A Smarandache loop is universal if all its $f,g$-principal isotopes
are Smarandache $f,g$-principal isotopes. Conversely, if a
Smarandache loop is universal then
\begin{displaymath}
(I,L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})
\end{displaymath} is an autotopism of an S-subloop of the S-loop such that $f$ and $g$ are S-elements.
\end{myth}
{\bf Proof}\\
Every loop is a quasigroup. Hence, the first claim follows from
Theorem~\ref{1:4}. The proof of the converse is as follows. If a
Smarandache loop $(G,\oplus )$ is universal, then every isotope
$(H,\otimes)$ is an S-loop, i.e., there exists an S-subloop $(S,\otimes
)$ in $(H,\otimes )$. Let $(G,\circ )$ be the $f,g$-principal
isotope of $(G,\oplus)$; then by Corollary~\ref{1:2}, $(G,\circ)$ is
an S-loop with, say, an S-subloop $(S,\circ)$. So,
\begin{displaymath}
(x\circ y)\circ z=x\circ (y\circ z)~\forall~x,y,z\in S
\end{displaymath}
where \begin{displaymath} x\circ y=xR_g^{-1}\oplus
yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
\begin{displaymath}
(xR_g^{-1}\oplus yL_f^{-1})R_g^{-1}\oplus zL_f^{-1}=xR_g^{-1}\oplus
(yR_g^{-1}\oplus zL_f^{-1})L_f^{-1}. \end{displaymath} Replacing
$xR_g^{-1}$ by $x'$, $yL_f^{-1}$ by $y'$ and taking $z=e$ in
$(S,\oplus)$ we have \begin{displaymath} (x'\oplus
y')R_g^{-1}R_{f^\rho}=x'\oplus
y'L_fR_g^{-1}R_{f^\rho}L_f^{-1}\Rightarrow
(I,L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho}) \end{displaymath} is an
autotopism of an S-subloop $(S,\oplus )$ of the S-loop $(G,\oplus )$
such that $f$ and $g$ are S-elements.
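Here (a clarifying note added to the proof) $e$ denotes the identity
of $(S,\oplus)$, and $f^\rho$, $g^\lambda$ denote the right and left
inverses of $f$ and $g$, i.e. $f\oplus f^\rho=e$ and $g^\lambda\oplus
g=e$. Taking $z=e$ uses
\begin{displaymath}
eL_f^{-1}=f^\rho\qquad\mbox{and}\qquad eR_g^{-1}=g^\lambda,
\end{displaymath}
so that $\oplus\,eL_f^{-1}$ becomes the translation $R_{f^\rho}$; the
same substitutions are used, without further comment, in the converse
proofs below.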
\subsection*{Universality of Smarandache Bol, Moufang and Extra Loops}
\begin{myth}\label{1:6}
A Smarandache right(left) Bol loop is universal if all its
$f,g$-principal isotopes are Smarandache $f,g$-principal isotopes.
Conversely, if a Smarandache right(left) Bol loop is universal then
\begin{displaymath}
{\cal
T}_1=(R_gR_{f^\rho}^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})\bigg({\cal
T}_2=(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda})\bigg)
\end{displaymath}
is an autotopism of an SRB(SLB)-subloop of the SRBL(SLBL) such that
$f$ and $g$ are S-elements, the triple in big parentheses being the
left Bol case.
\end{myth}
{\bf Proof}\\
Let $(G,\oplus)$ be an SRBL(SLBL) with an SRB(SLB)-subloop $(S,\oplus
)$. If $(G,\circ )$ is an arbitrary $f,g$-principal isotope of
$(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of
$(G,\circ)$ if $(S,\circ )$ is a Smarandache $f,g$-principal isotope
of $(S,\oplus )$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
It is already known from \cite{phd3} that RB(LB) loops are
universal; hence $(S,\circ )$ is an RB(LB) loop and thus an
SRB(SLB)-subloop of $(G,\circ)$. By Theorem~\ref{1:1}, for any
isotope $(H,\otimes )$ of $(G,\oplus)$, there exists a $(G,\circ )$
such that $(H,\otimes )\cong (G,\circ )$. So we can now choose the
isomorphic image of $(S,\circ)$ which will now be an
SRB(SLB)-subloop in $(H,\otimes )$. So, $(H,\otimes )$ is an
SRBL(SLBL). This conclusion can also be drawn straight from
Corollary~\ref{1:2}.
The proof of the converse is as follows. If an SRBL(SLBL) $(G,\oplus
)$ is universal, then every isotope $(H,\otimes)$ is an SRBL(SLBL),
i.e., there exists an SRB(SLB)-subloop $(S,\otimes )$ in $(H,\otimes
)$. Let $(G,\circ )$ be the $f,g$-principal isotope of $(G,\oplus)$;
then by Corollary~\ref{1:2}, $(G,\circ)$ is an SRBL(SLBL) with, say,
an SRB(SLB)-subloop $(S,\circ)$. So for an SRB-subloop $(S,\circ)$,
\begin{displaymath}
[(y\circ x)\circ z]\circ x=y\circ [(x\circ z)\circ
x]~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
[(yR_g^{-1}\oplus xL_f^{-1})R_g^{-1}\oplus zL_f^{-1}]R_g^{-1}\oplus
xL_f^{-1}=yR_g^{-1}\oplus [(xR_g^{-1}\oplus zL_f^{-1})R_g^{-1}\oplus
xL_f^{-1}]L_f^{-1}. \end{displaymath} Replacing $yR_g^{-1}$ by $y'$,
$zL_f^{-1}$ by $z'$ and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
(y'R_{f^\rho}R_g^{-1}\oplus z')R_g^{-1}R_{f^\rho}=y'\oplus
z'L_{g^\lambda}R_g^{-1}R_{f^\rho}L_f^{-1}. \end{displaymath} Again,
replace $y'R_{f^\rho}R_g^{-1}$ by $y''$ so that
\begin{displaymath}
(y''\oplus z')R_g^{-1}R_{f^\rho}=y''R_gR_{f^\rho}^{-1}\oplus
z'L_{g^\lambda}R_g^{-1}R_{f^\rho}L_f^{-1}\Rightarrow
(R_gR_{f^\rho}^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})
\end{displaymath}
is an autotopism of an SRB-subloop $(S,\oplus )$ of the S-loop $(G,\oplus )$ such that $f$ and $g$ are S-elements.\\
On the other hand, for an SLB-subloop $(S,\circ)$,
\begin{displaymath}
[x\circ (y\circ x)]\circ z=x\circ [y\circ (x\circ
z)]~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
[xR_g^{-1}\oplus (yR_g^{-1}\oplus xL_f^{-1})L_f^{-1}]R_g^{-1}\oplus
zL_f^{-1}=xR_g^{-1}\oplus [yR_g^{-1}\oplus (xR_g^{-1}\oplus
zL_f^{-1})L_f^{-1}]L_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
y'R_{f^\rho}L_f^{-1}L_{g^\lambda}R_g^{-1}\oplus z'=(y'\oplus
z'L_{g^\lambda}L_f^{-1})L_f^{-1}L_{g^\lambda}.
\end{displaymath} Again, replace $z'L_{g^\lambda}L_f^{-1}$ by $z''$
so that
\begin{displaymath}
y'R_{f^\rho}L_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z''L_fL_{g^\lambda}^{-1}=(y'\oplus
z'')L_f^{-1}L_{g^\lambda}\Rightarrow
(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda})
\end{displaymath}
is an autotopism of an SLB-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
\begin{myth}\label{1:7}
A Smarandache Moufang loop is universal if all its $f,g$-principal
isotopes are Smarandache $f,g$-principal isotopes. Conversely, if a
Smarandache Moufang loop is universal then
\begin{displaymath}
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}),
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda}),
(R_gR_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}L_f^{-1},R_g^{-1}R_{f^\rho}),
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_{g^\lambda}^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho})
\end{displaymath}
are autotopisms of an SM-subloop of the SML such that $f$ and $g$
are S-elements.
\end{myth}
{\bf Proof}\\
Let $(G,\oplus)$ be an SML with an SM-subloop $(S,\oplus )$. If
$(G,\circ )$ is an arbitrary $f,g$-principal isotope of
$(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of
$(G,\circ)$ if $(S,\circ )$ is a Smarandache $f,g$-principal isotope
of $(S,\oplus )$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
It is already known from \cite{phd3} that Moufang loops are
universal; hence $(S,\circ )$ is a Moufang loop and thus an SM-subloop
of $(G,\circ)$. By Theorem~\ref{1:1}, for any isotope $(H,\otimes )$
of $(G,\oplus)$, there exists a $(G,\circ )$ such that $(H,\otimes
)\cong (G,\circ )$. So we can now choose the isomorphic image of
$(S,\circ)$ which will now be an SM-subloop in $(H,\otimes )$. So,
$(H,\otimes )$ is an SML. This conclusion can also be drawn straight
from Corollary~\ref{1:2}.
The proof of the converse is as follows. If an SML $(G,\oplus )$ is
universal, then every isotope $(H,\otimes)$ is an SML, i.e., there
exists an SM-subloop $(S,\otimes )$ in $(H,\otimes )$. Let $(G,\circ
)$ be the $f,g$-principal isotope of $(G,\oplus)$; then by
Corollary~\ref{1:2}, $(G,\circ)$ is an SML with, say, an SM-subloop
$(S,\circ)$. For an SM-subloop $(S,\circ)$,
\begin{displaymath}
(x\circ y)\circ (z\circ x)=[x\circ (y\circ z)]\circ
x~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
(xR_g^{-1}\oplus yL_f^{-1})R_g^{-1}\oplus (zR_g^{-1}\oplus
xL_f^{-1})L_f^{-1}=[xR_g^{-1}\oplus (yR_g^{-1}\oplus
zL_f^{-1})L_f^{-1}]R_g^{-1}\oplus xL_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z'L_fR_g^{-1}R_{f^\rho}L_f^{-1}=(y'\oplus
z')L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}\Rightarrow
\end{displaymath}
\begin{displaymath}
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho})
\end{displaymath}
is an autotopism of an SM-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Again, for an SM-subloop $(S,\circ)$,
\begin{displaymath}
(x\circ y)\circ (z\circ x)=x\circ [(y\circ z)\circ x]
~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
(xR_g^{-1}\oplus yL_f^{-1})R_g^{-1}\oplus (zR_g^{-1}\oplus
xL_f^{-1})L_f^{-1}=xR_g^{-1}\oplus [(yR_g^{-1}\oplus
zL_f^{-1})R_g^{-1}\oplus xL_f^{-1}]L_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z'L_fR_g^{-1}R_{f^\rho}L_f^{-1}=(y'\oplus
z')R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}\Rightarrow
\end{displaymath}
\begin{displaymath}
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda})
\end{displaymath}
is an autotopism of an SM-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Also, if $(S,\circ)$ is an SM-subloop then,
\begin{displaymath}
[(x\circ y)\circ x]\circ z=x\circ [y\circ (x\circ z)]
~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
[(xR_g^{-1}\oplus yL_f^{-1})R_g^{-1}\oplus xL_f^{-1}]R_g^{-1}\oplus
zL_f^{-1}=xR_g^{-1}\oplus [yR_g^{-1}\oplus (xR_g^{-1}\oplus
zL_f^{-1})L_f^{-1}]L_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}R_g^{-1}\oplus
z'=(y'\oplus z'L_{g^\lambda}L_f^{-1})L_f^{-1}L_{g^\lambda}.
\end{displaymath}
Again, replace $z'L_{g^\lambda}L_f^{-1}$ by $z''$ so that
\begin{displaymath}
y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}R_g^{-1}\oplus
z''L_fL_{g^\lambda}^{-1}=(y'\oplus
z'')L_f^{-1}L_{g^\lambda}\Rightarrow
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda})
\end{displaymath}
is an autotopism of an SM-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Furthermore, if $(S,\circ)$ is an SM-subloop then,
\begin{displaymath}
[(y\circ x)\circ z]\circ x=y\circ [x\circ (z\circ x)]
~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
[(yR_g^{-1}\oplus xL_f^{-1})R_g^{-1}\oplus zL_f^{-1}]R_g^{-1}\oplus
xL_f^{-1}=yR_g^{-1}\oplus [xR_g^{-1}\oplus (zR_g^{-1}\oplus
xL_f^{-1})L_f^{-1}]L_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
(y'R_{f^\rho}R_g^{-1}\oplus z')R_g^{-1}R_{f^\rho}=y'\oplus
z'L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}L_f^{-1}.
\end{displaymath}
Again, replace $y'R_{f^\rho}R_g^{-1}$ by $y''$ so that
\begin{displaymath}
(y''\oplus z')R_g^{-1}R_{f^\rho}=y''R_gR_{f^\rho}^{-1}\oplus
z'L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}L_f^{-1}\Rightarrow
(R_gR_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}L_f^{-1},R_g^{-1}R_{f^\rho})
\end{displaymath}
is an autotopism of an SM-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Lastly, $(S,\circ)$ is an SM-subloop if and only if it is both an
SRB-subloop and an SLB-subloop. So by Theorem~\ref{1:6}, ${\cal
T}_1$ and ${\cal T}_2$ are autotopisms in $(S,\oplus)$; hence ${\cal
T}_1{\cal T}_2$ and ${\cal T}_2{\cal T}_1$ are autotopisms in
$(S,\oplus)$.
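Writing maps on the right (so that $\alpha\beta$ means `first
$\alpha$, then $\beta$'), autotopisms compose componentwise. A
verification added here (it is not spelled out in the original) shows
that these two products are exactly the remaining triples in the
statement:
\begin{displaymath}
{\cal T}_1{\cal T}_2=(R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_{g^\lambda}^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
\end{displaymath}
\begin{displaymath}
{\cal T}_2{\cal T}_1=(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho});
\end{displaymath}
for instance, the first component of ${\cal T}_1{\cal T}_2$ is
$R_gR_{f^\rho}^{-1}\cdot R_{f^\rho}L_f^{-1}L_{g^\lambda}R_g^{-1}
=R_gL_f^{-1}L_{g^\lambda}R_g^{-1}$, since
$R_{f^\rho}^{-1}R_{f^\rho}=I$.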
\begin{myth}\label{1:8}
A Smarandache extra loop is universal if all its $f,g$-principal
isotopes are Smarandache $f,g$-principal isotopes. Conversely, if a
Smarandache extra loop is universal then
$(R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_{f^\rho}^{-1}R_gL_f^{-1},L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1}R_g)$,
\begin{displaymath}
(R_gR_{f^\rho}^{-1}R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_{g^\lambda}L_f^{-1},L_f^{-1}L_{g^\lambda}),
(R_{f^\rho}R_g^{-1},L_fL_{g^\lambda}^{-1}L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})
\end{displaymath}
\begin{displaymath}
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}),
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda}),
(R_gR_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}L_f^{-1},R_g^{-1}R_{f^\rho}),
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_{g^\lambda}^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}),
\end{displaymath}
are autotopisms of an SE-subloop of the SEL such that $f$ and $g$
are S-elements.
\end{myth}
{\bf Proof}\\
Let $(G,\oplus)$ be an SEL with an SE-subloop $(S,\oplus )$. If
$(G,\circ )$ is an arbitrary $f,g$-principal isotope of
$(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of
$(G,\circ)$ if $(S,\circ )$ is a Smarandache $f,g$-principal isotope
of $(S,\oplus )$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
In \cite{phd34} and \cite{phd36} respectively, it was shown and
stated that a loop is an extra loop if and only if it is both a
Moufang loop and a CC-loop. Since CC-loops are G-loops (they are
isomorphic to all their loop isotopes), extra loops are universal;
hence $(S,\circ )$ is an extra loop and thus an SE-subloop of
$(G,\circ)$. By Theorem~\ref{1:1}, for any isotope $(H,\otimes )$ of
$(G,\oplus)$, there exists a $(G,\circ )$ such that $(H,\otimes
)\cong (G,\circ )$. So we can now choose the isomorphic image of
$(S,\circ)$ which will now be an SE-subloop in $(H,\otimes )$. So,
$(H,\otimes )$ is an SEL. This conclusion can also be drawn straight
from Corollary~\ref{1:2}.
The proof of the converse is as follows. If an SEL $(G,\oplus )$ is
universal, then every isotope $(H,\otimes)$ is an SEL, i.e., there
exists an SE-subloop $(S,\otimes )$ in $(H,\otimes )$. Let $(G,\circ
)$ be the $f,g$-principal isotope of $(G,\oplus)$; then by
Corollary~\ref{1:2}, $(G,\circ)$ is an SEL with, say, an SE-subloop
$(S,\circ)$. For an SE-subloop $(S,\circ)$,
\begin{displaymath}
[(x\circ y)\circ z]\circ x=x\circ [y\circ (z\circ
x)]~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
[(xR_g^{-1}\oplus yL_f^{-1})R_g^{-1}\oplus zL_f^{-1}]R_g^{-1}\oplus
xL_f^{-1}=xR_g^{-1}\oplus [yR_g^{-1}\oplus (zR_g^{-1}\oplus
xL_f^{-1})L_f^{-1}]L_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
(y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z')R_g^{-1}R_{f^\rho}=(y'\oplus
z'L_fR_g^{-1}R_{f^\rho}L_f^{-1})L_f^{-1}L_{g^\lambda}.
\end{displaymath}
Again, replace $z'L_fR_g^{-1}R_{f^\rho}L_f^{-1}$ by $z''$ so that
\begin{displaymath}
y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z''L_fR_{f^\rho}^{-1}R_gL_f^{-1}=(y'\oplus
z'')L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1}R_g\Rightarrow
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_{f^\rho}^{-1}R_gL_f^{-1},L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1}R_g)
\end{displaymath}
is an autotopism of an SE-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Again, for an SE-subloop $(S,\circ)$,
\begin{displaymath}
(x\circ y)\circ (x\circ z)=x\circ [(y\circ x)\circ z]
~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
(xR_g^{-1}\oplus yL_f^{-1})R_g^{-1}\oplus (xR_g^{-1}\oplus
zL_f^{-1})L_f^{-1}=xR_g^{-1}\oplus [(yR_g^{-1}\oplus
xL_f^{-1})R_g^{-1}\oplus zL_f^{-1}]L_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
y'R_gL_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z'L_{g^\lambda}L_f^{-1}=(y'R_{f^\rho}R_g^{-1}\oplus
z')L_f^{-1}L_{g^\lambda}.
\end{displaymath}
Again, replace $y'R_{f^\rho}R_g^{-1}$ by $y''$ so that
\begin{displaymath}
y''R_gR_{f^\rho}^{-1}R_gL_f^{-1}L_{g^\lambda}R_g^{-1}\oplus
z'L_{g^\lambda}L_f^{-1}=(y''\oplus
z')L_f^{-1}L_{g^\lambda}\Rightarrow
(R_gR_{f^\rho}^{-1}R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_{g^\lambda}L_f^{-1},L_f^{-1}L_{g^\lambda})
\end{displaymath}
is an autotopism of an SE-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Also, if $(S,\circ)$ is an SE-subloop then,
\begin{displaymath}
(y\circ x)\circ (z\circ x)=[y\circ (x\circ z)]\circ x
~\forall~x,y,z\in S\end{displaymath} where
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
Thus,
\begin{displaymath}
(yR_g^{-1}\oplus xL_f^{-1})R_g^{-1}\oplus (zR_g^{-1}\oplus
xL_f^{-1})L_f^{-1}= [yR_g^{-1}\oplus (xR_g^{-1}\oplus
zL_f^{-1})L_f^{-1}]R_g^{-1}\oplus xL_f^{-1}.
\end{displaymath} Replacing $yR_g^{-1}$ by $y'$, $zL_f^{-1}$ by $z'$
and taking $x=e$ in $(S,\oplus)$ we have
\begin{displaymath}
y'R_{f^\rho}R_g^{-1}\oplus z'L_fR_g^{-1}R_{f^\rho}L_f^{-1}=(y'\oplus
z'L_{g^\lambda}L_f^{-1})R_g^{-1}R_{f^\rho}.
\end{displaymath}
Again, replace $z'L_{g^\lambda}L_f^{-1}$ by $z''$ so that
\begin{displaymath}
y'R_{f^\rho}R_g^{-1}\oplus
z''L_fL_{g^\lambda}^{-1}L_fR_g^{-1}R_{f^\rho}L_f^{-1}=(y'\oplus
z'')R_g^{-1}R_{f^\rho}\Rightarrow
(R_{f^\rho}R_g^{-1},L_fL_{g^\lambda}^{-1}L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})
\end{displaymath}
is an autotopism of an SE-subloop $(S,\oplus )$ of the S-loop
$(G,\oplus )$ such that $f$ and $g$ are S-elements.
Lastly, $(S,\circ)$ is an SE-subloop if and only if it is both an
SM-subloop and an SCC-subloop. So by Theorem~\ref{1:7}, the six
remaining triples are autotopisms in $(S,\oplus)$.
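Explicitly (an identification added for clarity): the six remaining
triples, i.e., the last six displayed in the statement of
Theorem~\ref{1:8}, are precisely the six autotopisms obtained for
SM-subloops in Theorem~\ref{1:7}, so no new computation is required
for them.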
\subsection*{Universality of Smarandache Inverse Property Loops}
\begin{myth}\label{1:9}
A Smarandache left(right) inverse property loop in which all its
$f,g$-principal isotopes are Smarandache $f,g$-principal isotopes is
universal if and only if it is a Smarandache left(right) Bol loop in
which all its $f,g$-principal isotopes are Smarandache
$f,g$-principal isotopes.
\end{myth}
{\bf Proof}\\
Let $(G,\oplus)$ be an SLIPL with an SLIP-subloop $(S,\oplus )$. If
$(G,\circ )$ is an arbitrary $f,g$-principal isotope of
$(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of
$(G,\circ)$ if $(S,\circ )$ is a Smarandache $f,g$-principal isotope
of $(S,\oplus )$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
$(G,\oplus)$ is a universal SLIPL if and only if every isotope
$(H,\otimes )$ is an SLIPL. $(H,\otimes )$ is an SLIPL if and only if
it has at least an SLIP-subloop $(S,\otimes )$. By Theorem~\ref{1:1},
for any isotope $(H,\otimes )$ of $(G,\oplus)$, there exists a
$(G,\circ )$ such that $(H,\otimes )\cong (G,\circ )$. So we can now
choose the isomorphic image of $(S,\circ)$ to be $(S,\otimes )$
which is already an SLIP-subloop in $(H,\otimes )$. So, $(S,\circ)$
is also an SLIP-subloop in $(G,\circ )$. As shown in \cite{phd3},
$(S,\oplus )$ and its $f,g$-isotope (Smarandache $f,g$-isotope)
$(S,\circ)$ are SLIP-subloops if and only if $(S,\oplus )$ is a left
Bol subloop (i.e., an SLB-subloop). So, $(G,\oplus)$ is an SLBL.
Conversely, if $(G,\oplus)$ is an SLBL, then there exists an
SLB-subloop $(S,\oplus )$ in $(G,\oplus)$. If $(G,\circ )$ is an
arbitrary $f,g$-principal isotope of $(G,\oplus)$, then by
Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of $(G,\circ)$ if
$(S,\circ )$ is a Smarandache $f,g$-principal isotope of $(S,\oplus
)$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
By Theorem~\ref{1:1}, for any isotope $(H,\otimes )$ of
$(G,\oplus)$, there exists a $(G,\circ )$ such that $(H,\otimes
)\cong (G,\circ )$. So we can now choose the isomorphic image of
$(S,\circ)$ to be $(S,\otimes )$, which is an SLB-subloop in
$(H,\otimes )$, using the same reasoning as in Theorem~\ref{1:6}. So,
$(S,\circ)$ is an SLB-subloop in $(G,\circ )$. Left Bol loops have
the left inverse property (LIP); hence $(S,\oplus )$ and $(S,\circ)$
are SLIP-subloops in $(G,\oplus)$ and $(G,\circ )$ respectively.
Thence, $(G,\oplus)$ and $(G,\circ )$ are SLIPLs. Therefore,
$(G,\oplus)$ is a universal SLIPL.\\
The proof for a Smarandache right inverse property loop is similar
and is as follows. Let $(G,\oplus)$ be an SRIPL with an SRIP-subloop
$(S,\oplus )$. If $(G,\circ )$ is an arbitrary $f,g$-principal
isotope of $(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a
subloop of $(G,\circ)$ if $(S,\circ )$ is a Smarandache
$f,g$-principal isotope of $(S,\oplus )$. Let us choose all
$(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
$(G,\oplus)$ is a universal SRIPL if and only if every isotope
$(H,\otimes )$ is an SRIPL. $(H,\otimes )$ is an SRIPL if and only if
it has at least an SRIP-subloop $(S,\otimes )$. By Theorem~\ref{1:1},
for any isotope $(H,\otimes )$ of $(G,\oplus)$, there exists a
$(G,\circ )$ such that $(H,\otimes )\cong (G,\circ )$. So we can now
choose the isomorphic image of $(S,\circ)$ to be $(S,\otimes )$
which is already an SRIP-subloop in $(H,\otimes )$. So, $(S,\circ)$
is also an SRIP-subloop in $(G,\circ )$. As shown in \cite{phd3},
$(S,\oplus )$ and its $f,g$-isotope (Smarandache $f,g$-isotope)
$(S,\circ)$ are SRIP-subloops if and only if $(S,\oplus )$ is a
right Bol subloop (i.e., an SRB-subloop). So, $(G,\oplus)$ is an SRBL.
Conversely, if $(G,\oplus)$ is an SRBL, then there exists an
SRB-subloop $(S,\oplus )$ in $(G,\oplus)$. If $(G,\circ )$ is an
arbitrary $f,g$-principal isotope of $(G,\oplus)$, then by
Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of $(G,\circ)$ if
$(S,\circ )$ is a Smarandache $f,g$-principal isotope of $(S,\oplus
)$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
By Theorem~\ref{1:1}, for any isotope $(H,\otimes )$ of
$(G,\oplus)$, there exists a $(G,\circ )$ such that $(H,\otimes
)\cong (G,\circ )$. So we can now choose the isomorphic image of
$(S,\circ)$ to be $(S,\otimes )$, which is an SRB-subloop in
$(H,\otimes )$, using the same reasoning as in Theorem~\ref{1:6}. So,
$(S,\circ)$ is an SRB-subloop in $(G,\circ )$. Right Bol loops have
the right inverse property (RIP); hence $(S,\oplus )$ and
$(S,\circ)$ are SRIP-subloops in $(G,\oplus)$ and $(G,\circ )$
respectively. Thence, $(G,\oplus)$ and $(G,\circ )$ are SRIPLs.
Therefore, $(G,\oplus)$ is a universal SRIPL.
\begin{myth}\label{1:10}
A Smarandache inverse property loop in which all its $f,g$-principal
isotopes are Smarandache $f,g$-principal isotopes is universal if
and only if it is a Smarandache Moufang loop in which all its
$f,g$-principal isotopes are Smarandache $f,g$-principal isotopes.
\end{myth}
{\bf Proof}\\
Let $(G,\oplus)$ be an SIPL with an SIP-subloop $(S,\oplus )$. If
$(G,\circ )$ is an arbitrary $f,g$-principal isotope of
$(G,\oplus)$, then by Lemma~\ref{1:3}, $(S,\circ )$ is a subloop of
$(G,\circ)$ if $(S,\circ )$ is a Smarandache $f,g$-principal isotope
of $(S,\oplus )$. Let us choose all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
$(G,\oplus)$ is a universal SIPL if and only if every isotope
$(H,\otimes )$ is an SIPL. $(H,\otimes )$ is an SIPL if and only if it
has at least an SIP-subloop $(S,\otimes )$. By Theorem~\ref{1:1}, for
any isotope $(H,\otimes )$ of $(G,\oplus)$, there exists a $(G,\circ
)$ such that $(H,\otimes )\cong (G,\circ )$. So we can now choose
the isomorphic image of $(S,\circ)$ to be $(S,\otimes )$ which is
already an SIP-subloop in $(H,\otimes )$. So, $(S,\circ)$ is also an
SIP-subloop in $(G,\circ )$. As shown in \cite{phd3}, $(S,\oplus )$
and its $f,g$-isotope (Smarandache $f,g$-isotope) $(S,\circ)$ are
SIP-subloops if and only if $(S,\oplus )$ is a Moufang subloop (i.e.,
an SM-subloop). So, $(G,\oplus)$ is an SML.
Conversely, if $(G,\oplus)$ is an SML, then there exists an SM-subloop
$(S,\oplus )$ in $(G,\oplus)$. If $(G,\circ )$ is an arbitrary
$f,g$-principal isotope of $(G,\oplus)$, then by Lemma~\ref{1:3},
$(S,\circ )$ is a subloop of $(G,\circ)$ if $(S,\circ )$ is a
Smarandache $f,g$-principal isotope of $(S,\oplus )$. Let us choose
all $(S,\circ )$ in this manner. So,
\begin{displaymath}
x\circ y=xR_g^{-1}\oplus yL_f^{-1}~\forall~x,y\in S.
\end{displaymath}
By Theorem~\ref{1:1}, for any isotope $(H,\otimes )$ of
$(G,\oplus)$, there exists a $(G,\circ )$ such that $(H,\otimes
)\cong (G,\circ )$. So we can now choose the isomorphic image of
$(S,\circ)$ to be $(S,\otimes )$, which is an SM-subloop in
$(H,\otimes )$, using the same reasoning as in Theorem~\ref{1:6}. So,
$(S,\circ)$ is an SM-subloop in $(G,\circ )$. Moufang loops have the
inverse property (IP); hence $(S,\oplus )$ and $(S,\circ)$ are
SIP-subloops in $(G,\oplus)$ and $(G,\circ )$ respectively. Thence,
$(G,\oplus)$ and $(G,\circ )$ are SIPLs. Therefore, $(G,\oplus)$ is a
universal SIPL.
\begin{mycor}\label{1:11}
If a Smarandache left(right) inverse property loop is universal then
\begin{displaymath}
(R_gR_{f^\rho}^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho})\bigg(
(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda})\bigg)
\end{displaymath}
is an autotopism of an SLIP(SRIP)-subloop of the SLIPL(SRIPL) such
that $f$ and $g$ are S-elements.
\end{mycor}
{\bf Proof}\\
This follows from Theorem~\ref{1:9} and Theorem~\ref{1:6}.
\begin{mycor}\label{1:12}
If a Smarandache inverse property loop is universal then
\begin{displaymath}
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}),
(R_{g}L_f^{-1}L_{g^\lambda}R_g^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho}R_g^{-1},L_fL_{g^\lambda}^{-1},L_f^{-1}L_{g^\lambda}),
(R_gR_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}L_f^{-1},R_g^{-1}R_{f^\rho}),
\end{displaymath}
\begin{displaymath}
(R_gL_f^{-1}L_{g^\lambda}R_g^{-1},L_{g^\lambda}R_g^{-1}R_{f^\rho}L_{g^\lambda}^{-1},R_g^{-1}R_{f^\rho}L_f^{-1}L_{g^\lambda}),
(R_{f^\rho}L_f^{-1}L_{g^\lambda}R_{f^\rho}^{-1},L_fR_g^{-1}R_{f^\rho}L_f^{-1},L_f^{-1}L_{g^\lambda}R_g^{-1}R_{f^\rho})
\end{displaymath}
are autotopisms of an SIP-subloop of the SIPL such that $f$ and $g$
are S-elements.
\end{mycor}
{\bf Proof}\\
This follows from Theorem~\ref{1:10} and Theorem~\ref{1:7}.
\begin{thebibliography}{99}
\bibitem{phd30} R. Artzy (1959), {\it Crossed inverse and related
loops}, Trans. Amer. Math. Soc. 91, 3, 480--492.
\bibitem{phd41} R. H. Bruck (1966), {\it A survey of binary systems}, Springer-Verlag, Berlin-G\"ottingen-Heidelberg, 185pp.
\bibitem{phd40} R. H. Bruck and L. J. Paige (1956), {\it Loops whose
inner mappings are automorphisms}, The Annals of Mathematics, 63,
2, 308--323.
\bibitem{phd39} O. Chein, H. O. Pflugfelder and J. D. H. Smith (1990), {\it Quasigroups and loops : Theory and applications}, Heldermann Verlag, 568pp.
\bibitem{phd49} J. D\'{e}nes and A. D. Keedwell (1974), {\it Latin squares and their applications}, Academic Press, 549pp.
\bibitem{phd50} F. Fenyves (1968), {\it Extra loops I}, Publ. Math. Debrecen, 15, 235--238.
\bibitem{phd56} F. Fenyves (1969), {\it Extra loops II}, Publ. Math. Debrecen, 16, 187--192.
\bibitem{phd42} E. G. Goodaire, E. Jespers and C. P. Milies (1996), {\it Alternative loop rings}, NHMS(184), Elsevier, 387pp.
\bibitem{phd34} E. G. Goodaire and D. A. Robinson (1990), {\it Some special conjugacy closed loops}, Canad. Math. Bull. 33, 73--78.
\bibitem{sma1} T. G. Ja\'iy\'e\d ol\'a (2006), {\it An holomorphic study of the Smarandache concept in
loops}, Scientia Magna Journal, 2, 1, 1--8.
\bibitem{sma2} T. G. Ja\'iy\'e\d ol\'a (2006), {\it Smarandache quasigroups}, Scientia Magna Journal, 2, 2, to appear.
\bibitem{phdtope} T. G. Ja\'iy\'e\d ol\'a (2005), {\it An isotopic study of
properties of central loops}, M.Sc. Dissertation, University of
Agriculture, Abeokuta.
\bibitem{tope} T. G. Ja\'iy\'e\d ol\'a and J. O. Ad\'en\'iran, {\it On isotopic characterization of central
loops}, communicated for publication.
\bibitem{phd118} T. Kepka, M. K. Kinyon, J. D. Phillips, {\it
F-quasigroups and generalised modules}, communicated for
publication.
\bibitem{phd119} T. Kepka, M. K. Kinyon, J. D. Phillips, {\it F-quasigroups isotopic to groups}, communicated for publication.
\bibitem{phd95} T.
Kepka, M. K. Kinyon, J. D. Phillips, {\it The structure of
F-quasigroups}, communicated for publication.
\bibitem{phd36} M. K. Kinyon, K. Kunen (2004), {\it The structure of
extra loops}, Quasigroups and Related Systems 12, 39--60.
\bibitem{phd124} M. K. Kinyon, J. D. Phillips and P. Vojt\v echovsk\'y (2004), {\it Loops of Bol-Moufang type with a subgroup of index
two}, Bul. Acad. \c{S}tiin\c{t}e Repub. Mold. Mat. 2(45), 1--17.
\bibitem{ken2} K. Kunen (1996), {\it Quasigroups, loops and associative laws}, J. Alg. 185, 194--204.
\bibitem{ken1} K. Kunen (1996), {\it Moufang quasigroups}, J. Alg. 183, 231--234.
\bibitem{muk} A. S. Muktibodh (2006), {\it Smarandache quasigroups},
Scientia Magna Journal, 2, 1, 13--19.
\bibitem{phd88} P. T. Nagy and K. Strambach (1994), {\it Loops as
invariant sections in groups, and their geometry}, Canad. J. Math.
46, 5, 1027--1056.
\bibitem{phd43} J. M. Osborn (1961), {\it Loops with the weak
inverse property}, Pac. J. Math. 10, 295--304.
\bibitem{phd3} H. O. Pflugfelder (1990), {\it Quasigroups and loops : Introduction}, Sigma series in Pure Math. 7, Heldermann Verlag, Berlin, 147pp.
\bibitem{phd9} J. D. Phillips and P. Vojt\v echovsk\'y (2005), {\it The varieties of loops of Bol-Moufang type}, Alg. Univer. (to appear).
\bibitem{phd61} J. D. Phillips and P. Vojt\v echovsk\'y (2005), {\it The varieties of quasigroups of Bol-Moufang type : An equational
approach}, J. Alg. 293, 17--33.
\bibitem{phd75} W. B. Vasantha Kandasamy (2002), {\it Smarandache
loops}, Department of Mathematics, Indian Institute of Technology,
Madras, India, 128pp.
\bibitem{phd83} W. B. Vasantha Kandasamy (2002), {\it Smarandache
Loops}, Smarandache notions journal, 13, 252--258.
\end{thebibliography}
\end{document} |
"\\begin{document}\n\n\\newcounter{algnum}\n\\newcounter{step}\n\\newtheorem{alg}{Algorithm}\n\n\\ne(...TRUNCATED) |
"\\begin{document}\n\n\\begin{center} \n\n{A heuristic for the non-unicost set covering problem usin(...TRUNCATED) |
"\\betaegin{document}\n\n \\title{Differential systems with reflection\\ and matrix invariants}\n \n(...TRUNCATED) |
"\\begin{document}\n\n\\title{Discriminating between L\\\"uders and von Neumann measuring devices: (...TRUNCATED) |
"\\begin{document}\n\n\\begin{center} {Stability and bifurcation analysis of a SIR model with satur(...TRUNCATED) |
"\\begin{document}\n\n\\title{Temporal profile of biphotons generated from a hot atomic vapor and sp(...TRUNCATED) |